modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
serpdotai/sparsetral-16x7B-v2 | serpdotai | 2024-02-06T00:23:43Z | 460 | 70 | transformers | [
"transformers",
"safetensors",
"sparsetral",
"text-generation",
"conversational",
"custom_code",
"en",
"dataset:teknium/OpenHermes-2.5",
"arxiv:2401.02731",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-05T21:04:33Z | ---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- en
---
## Training
- 8x A6000s
- [Forked version of unsloth](https://github.com/serp-ai/unsloth) for efficient training
- Sequence Length: 4096
- Effective batch size: 128
- Learning Rate: 2e-5 with linear decay
- Epochs: 1
- [Base model](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with QLoRA (rank 64, alpha 16) and MoE adapters/routers trained in bf16
- Num Experts: 16
- Top K: 4
- Adapter Dim: 512
## Prompt Format
```
<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{message}<|im_end|>\n<|im_start|>assistant\n
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("serpdotai/sparsetral-16x7B-v2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("serpdotai/sparsetral-16x7B-v2", device_map="auto", trust_remote_code=True).eval()
system_str = "<|im_start|>system\n{message}<|im_end|>\n"
user_str = "<|im_start|>user\n{message}<|im_end|>\n"
assistant_str = "<|im_start|>assistant\n{message}<|im_end|>\n"
def construct_prompt(messages):
    prompt = ""
    for message in messages:
        if message["from"] in ["human", "user"]:
            prompt += user_str.format(message=message["value"])
        elif message["from"] in ["gpt", "assistant"]:
            prompt += assistant_str.format(message=message["value"])
        elif message["from"] in ["system", "instruction"]:
            prompt += system_str.format(message=message["value"])
        else:
            raise ValueError(f"Unknown message type: {message['from']}")
    return prompt + "<|im_start|>assistant\n"
system = "You are a helpful assistant who will help the user to the best of their ability. If you don't know something, say \"I don't know\""
user = "Are you sentient?"
messages = [
{"from": "system", "value": system},
{"from": "user", "value": user},
]
prompt = construct_prompt(messages)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Other Information
Paper reference: [Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731)
[Original Paper repo](https://github.com/wuhy68/Parameter-Efficient-MoE)
[Forked repo with mistral support (sparsetral)](https://github.com/serp-ai/Parameter-Efficient-MoE)
If you are interested in faster inferencing, check out our [fork of vLLM](https://github.com/serp-ai/vllm) that adds sparsetral support |
Lewdiculous/Layris_9B-GGUF-IQ-Imatrix | Lewdiculous | 2024-05-04T14:46:07Z | 460 | 11 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"text-generation-inference",
"roleplay",
"text-generation",
"region:us"
]
| text-generation | 2024-03-06T03:26:55Z | ---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- roleplay
# - rp
# - uncensored
pipeline_tag: text-generation
inference: false
# language:
# - en
# FILL THE INFORMATION:
# Reference: ChaoticNeutrals/Layris_9B
# Author: ChaoticNeutrals
# Model: Layris_9B
# Llama.cpp version: b2350
---
> [!TIP]
> **Support:** <br>
> My upload speeds have been cooked and unstable lately. <br>
> Realistically I'd need to move to get a better provider. <br>
> If you **want** and you are able to... <br>
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br>
> I apologize for disrupting your experience.
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M",
"Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
]
```
## GGUF-Imatrix quantizations for [ChaoticNeutrals/Layris_9B](https://huggingface.co/ChaoticNeutrals/Layris_9B/).
All credits belong to the author.
If you liked these, check out the work with [FantasiaFoundry's GGUF-IQ-Imatrix-Quantization-Script](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script).
**Personal note:**
This model should give you fewer refusals, given that it's merged with the unhinged **Layla-V4**.
## What does "Imatrix" mean?
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. <br>
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006/) <br>
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance and lead to better quality preservation, especially when the calibration data is diverse. <br>
[[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384/)
For the --imatrix data, the included `imatrix.dat` file was used.
Using [llama.cpp-b2350](https://github.com/ggerganov/llama.cpp/releases/tag/b2350/):
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
The new **IQ3_S** quant option has been shown to be better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.
If you want any specific quantization to be added, feel free to ask.
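If you want a quick local sanity check of one of these files, here is a minimal sketch using the `llama-cpp-python` bindings (my own illustration, not part of the original upload; the file name is a placeholder for whichever quant you downloaded, and any other GGUF-capable runtime such as koboldcpp works just as well):
```python
# Minimal sketch (illustrative): load a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="./Layris_9B.Q4_K_M.gguf", n_ctx=8192)  # placeholder local path
output = llm("Once upon a time,", max_tokens=64)
print(output["choices"][0]["text"])
```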
## Model card:

## Original model information:
# Layris

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [ChaoticNeutrals/Eris_Remix_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B)
* [l3utterfly/mistral-7b-v0.1-layla-v4](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/Eris_Remix_7B
        layer_range: [0, 20]
  - sources:
      - model: l3utterfly/mistral-7b-v0.1-layla-v4
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
|
mradermacher/XOrcaSlimWin-13B-GGUF | mradermacher | 2024-05-06T05:59:15Z | 460 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Masterjp123/XOrcaSlimWin-13B",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-25T11:29:10Z | ---
base_model: Masterjp123/XOrcaSlimWin-13B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/Masterjp123/XOrcaSlimWin-13B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
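For example, a single file from the table below can be fetched programmatically. A minimal sketch (my own, not part of the original README) using `huggingface_hub`:
```python
# Minimal sketch (illustrative): download one of the quants listed below, then point your
# GGUF runtime (llama.cpp, koboldcpp, etc.) at the returned local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/XOrcaSlimWin-13B-GGUF",
    filename="XOrcaSlimWin-13B.Q4_K_M.gguf",  # one of the files in the table below
)
print(path)
```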
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q2_K.gguf) | Q2_K | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ3_XS.gguf) | IQ3_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ3_M.gguf) | IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q4_0.gguf) | Q4_0 | 7.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.IQ4_NL.gguf) | IQ4_NL | 7.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/XOrcaSlimWin-13B-GGUF/resolve/main/XOrcaSlimWin-13B.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/bigscience_-_bloom-1b7-gguf | RichardErkhov | 2024-04-26T23:22:39Z | 460 | 0 | null | [
"gguf",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"region:us"
]
| null | 2024-04-26T22:54:23Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b7 - GGUF
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bloom-1b7.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q2_K.gguf) | Q2_K | 0.98GB |
| [bloom-1b7.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_XS.gguf) | IQ3_XS | 1.08GB |
| [bloom-1b7.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_S.gguf) | IQ3_S | 1.1GB |
| [bloom-1b7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_S.gguf) | Q3_K_S | 1.1GB |
| [bloom-1b7.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_M.gguf) | IQ3_M | 1.15GB |
| [bloom-1b7.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K.gguf) | Q3_K | 1.2GB |
| [bloom-1b7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_M.gguf) | Q3_K_M | 1.2GB |
| [bloom-1b7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_L.gguf) | Q3_K_L | 1.25GB |
| [bloom-1b7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ4_XS.gguf) | IQ4_XS | 1.27GB |
| [bloom-1b7.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_0.gguf) | Q4_0 | 1.31GB |
| [bloom-1b7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ4_NL.gguf) | IQ4_NL | 1.31GB |
| [bloom-1b7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K_S.gguf) | Q4_K_S | 1.31GB |
| [bloom-1b7.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K.gguf) | Q4_K | 1.39GB |
| [bloom-1b7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K_M.gguf) | Q4_K_M | 1.39GB |
| [bloom-1b7.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_1.gguf) | Q4_1 | 1.41GB |
| [bloom-1b7.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_0.gguf) | Q5_0 | 1.51GB |
| [bloom-1b7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K_S.gguf) | Q5_K_S | 1.51GB |
| [bloom-1b7.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K.gguf) | Q5_K | 1.57GB |
| [bloom-1b7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K_M.gguf) | Q5_K_M | 1.57GB |
| [bloom-1b7.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_1.gguf) | Q5_1 | 1.61GB |
| [bloom-1b7.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q6_K.gguf) | Q6_K | 1.72GB |
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-1b7
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes / Stakeholders)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
- [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,722,408,960 parameters:
* 513,802,240 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 2048-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
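A quick arithmetic note (my own, not from the card): the embedding parameter count above factors cleanly into the hidden dimension and the number of rows in the embedding matrix:
```python
# Arithmetic check only: 513,802,240 embedding parameters at a hidden size of 2048.
embedding_params, hidden_dim = 513_802_240, 2048
print(embedding_params // hidden_dim)  # 250880 embedding rows, close to the ~250k-token vocabulary described below
```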
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 64 V100 16/32GB GPUs (16 nodes):
* 4 GPUs per node
* 40 CPUs per task
* 1 task per node
* CPU: AMD
* CPU memory: 160GB per node
* GPU memory: 64GB or 128GB (depending on node availability during training) per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
- Checkpoint size:
- Fp16 weights: 2.6GB (# params * 2)
- Full checkpoint with optimizer states: --
- Training throughput: --
- Number of epochs: 1
- Dates:
- Start: 11th March, 2022 11:42am PST
- End: 20 May, 2022
- Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
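As a small, self-contained illustration (mine, not part of the original card) of the byte-level BPE tokenizer described above:
```python
# Illustration only: load the tokenizer shipped with the bloom-1b7 checkpoint and inspect it.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
print(tok.vocab_size)                          # should match the vocabulary size stated above
print(tok.tokenize("BigScience built BLOOM"))  # byte-level BPE sub-word pieces
```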
## Citation
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
|
crusoeai/dolphin-2.9.1-yi-1.5-34b-GGUF | crusoeai | 2024-05-18T17:41:51Z | 460 | 5 | null | [
"gguf",
"region:us"
]
| null | 2024-05-18T15:21:50Z | Entry not found |
votepurchase/NSFW-gen-v2 | votepurchase | 2024-05-23T04:38:42Z | 460 | 1 | diffusers | [
"diffusers",
"safetensors",
"UnfilteredAI",
"3d",
"text-to-image",
"not-for-all-audiences",
"en",
"pt",
"th",
"base_model:OEvortex/PixelGen",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-05-23T04:38:42Z | ---
base_model: OEvortex/PixelGen
license: other
language:
- en
- pt
- th
library_name: diffusers
pipeline_tag: text-to-image
tags:
- UnfilteredAI
- 3d
- text-to-image
- not-for-all-audiences
---
**Model Name:** NSFW-gen-v2
**ANIME version** [Here](https://huggingface.co/UnfilteredAI/NSFW-GEN-ANIME)
**Type:** Text-to-Image Generator
**Description:** NSFW-gen is a text-to-image generator developed by UnfilteredAI. This model is designed to generate all kinds of images, including explicit and NSFW (Not Safe For Work) images from textual inputs.
**Features:**
- **Uncensored Output:** The model produces uncensored and potentially explicit images based on textual inputs.
- **Tensor Type:** Operates with FP16 tensor type for optimized performance and efficiency.
- **Model Size:** With 3.47 billion parameters, the model offers a vast capacity for learning and generating diverse imagery.
- **3D Style Rendering:** The model now includes 3D style/image rendering capability to generate more realistic images. (Use 3d, 3d style in your prompt)
**Usage Guidelines:**
- **Responsible Use:** Exercise discretion and responsibility when generating content with this model.
- **Age Restriction:** Due to the explicit nature of the generated content, usage is restricted to individuals over the legal age in their jurisdiction. |
larenspear/Yi-1.5-9B-Chat-Q4_K_S-GGUF | larenspear | 2024-07-01T16:24:52Z | 460 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-9B-Chat",
"license:apache-2.0",
"region:us"
]
| null | 2024-07-01T16:24:31Z | ---
base_model: 01-ai/Yi-1.5-9B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# larenspear/Yi-1.5-9B-Chat-Q4_K_S-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo larenspear/Yi-1.5-9B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-9b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo larenspear/Yi-1.5-9B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-9b-chat-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/Yi-1.5-9B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-9b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/Yi-1.5-9B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-9b-chat-q4_k_s.gguf -c 2048
```
|
Harveenchadha/vakyansh-wav2vec2-bhojpuri-bhom-60 | Harveenchadha | 2021-12-17T17:46:21Z | 459 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:04Z | Entry not found |
nickmuchi/detr-resnet50-finetuned-license-plate-detection | nickmuchi | 2022-08-18T21:19:32Z | 459 | 1 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"endpoints_compatible",
"region:us"
]
| object-detection | 2022-08-18T21:11:22Z | Entry not found |
EleutherAI/pythia-12b-deduped-v0 | EleutherAI | 2023-03-29T18:48:21Z | 459 | 25 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-10-18T03:09:06Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-12B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-12B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-12B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-12B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-12B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-12B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-12B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-12B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-12B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
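As a quick consistency check (my own arithmetic, not from the card), the total token count follows directly from the step count and batch size quoted above:
```python
# Consistency check: 143000 steps at 2,097,152 tokens per step.
steps, tokens_per_step = 143_000, 2_097_152
assert steps * tokens_per_step == 299_892_736_000  # total training tokens reported above
```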
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
timm/hrnet_w18.ms_in1k | timm | 2023-04-24T21:26:20Z | 459 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1908.07919",
"license:mit",
"region:us"
]
| image-classification | 2023-04-24T21:25:50Z | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for hrnet_w18.ms_in1k
A HRNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.3
- GMACs: 4.3
- Activations (M): 16.3
- Image size: 224 x 224
- **Papers:**
- Deep High-Resolution Representation Learning for Visual Recognition: https://arxiv.org/abs/1908.07919
- **Original:** https://github.com/HRNet/HRNet-Image-Classification
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('hrnet_w18.ms_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w18.ms_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 128, 56, 56])
    #  torch.Size([1, 256, 28, 28])
    #  torch.Size([1, 512, 14, 14])
    #  torch.Size([1, 1024, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w18.ms_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{WangSCJDZLMTWLX19,
title={Deep High-Resolution Representation Learning for Visual Recognition},
author={Jingdong Wang and Ke Sun and Tianheng Cheng and
Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and
Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
journal = {TPAMI},
year={2019}
}
```
|
cyberagent/open-calm-medium | cyberagent | 2023-05-18T01:10:54Z | 459 | 4 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-05-15T06:44:47Z | ---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-Medium
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-medium", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-medium")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.pad_token_id,
    )
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): 本モデルは、株式会社サイバーエージェントによるOpenCALM-XXをファインチューニングしたものです。元のモデルはCC BY-SA 4.0ライセンスのもとで公開されており、本モデルも同じくCC BY-SA 4.0ライセンスで公開します。詳しくはこちらをご覧ください: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
``` |
kazzand/ru-longformer-large-4096 | kazzand | 2023-07-12T08:45:32Z | 459 | 0 | transformers | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-11T02:17:22Z | ---
language:
- ru
---
This is a large Longformer model designed for the Russian language.
It was initialized from [ai-forever/ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large) weights and has been modified to support a context length of up to 4096 tokens.
We fine-tuned it on a dataset of Russian books. For more details, check out our post on Habr.
Model attributes:
* 16 attention heads
* 24 hidden layers
* 4096 tokens length of context
The model can be used as-is to produce text embeddings or it can be further fine-tuned for a specific downstream task.
Text embeddings can be produced as follows:
```python
# pip install transformers sentencepiece
import torch
from transformers import LongformerModel, LongformerTokenizerFast
model = LongformerModel.from_pretrained('kazzand/ru-longformer-large-4096')
tokenizer = LongformerTokenizerFast.from_pretrained('kazzand/ru-longformer-large-4096')
def get_cls_embedding(text, model, tokenizer, device='cuda'):
model.to(device)
batch = tokenizer(text, return_tensors='pt')
#set global attention for cls token
global_attention_mask = [
[1 if token_id == tokenizer.cls_token_id else 0 for token_id in input_ids]
for input_ids in batch["input_ids"]
]
#add global attention mask to batch
batch["global_attention_mask"] = torch.tensor(global_attention_mask)
with torch.no_grad():
output = model(**batch.to(device))
return output.last_hidden_state[:,0,:]
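
# Example usage of the helper above (assumes a CUDA-capable GPU; pass device='cpu' otherwise)
text = "Это пример текста для получения эмбеддинга."  # "This is an example text for producing an embedding."
embedding = get_cls_embedding(text, model, tokenizer)
print(embedding.shape)  # (1, hidden_size)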
``` |
lorahub/flan_t5_large-wiki_hop_original_choose_best_object_interrogative_2 | lorahub | 2023-07-24T10:16:41Z | 459 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:16:31Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-quarel_logic_test | lorahub | 2023-07-24T10:26:37Z | 459 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:26:27Z | ---
library_name: peft
---
|
amazon/FalconLite | amazon | 2023-11-17T11:00:22Z | 459 | 173 | transformers | [
"transformers",
"RefinedWeb",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-08-01T14:18:59Z | ---
license: apache-2.0
inference: false
---
# FalconLite Model
FalconLite is a quantized version of the [Falcon 40B SFT OASST-TOP1 model](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560), capable of processing long (i.e. 11K tokens) input sequences while consuming 4x less GPU memory. By utilizing 4-bit [GPTQ quantization](https://github.com/PanQiWei/AutoGPTQ) and adapted [dynamic NTK](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/) RotaryEmbedding, FalconLite achieves a balance between latency, accuracy, and memory efficiency. With the ability to process 5x longer contexts than the original model, FalconLite is useful for applications such as topic retrieval, summarization, and question-answering. FalconLite can be deployed on a single AWS `g5.12x` instance with [TGI 0.9.2](https://github.com/huggingface/text-generation-inference/tree/v0.9.2), making it suitable for applications that require high performance in resource-constrained environments.
## *New!* FalconLite2 Model ##
For the updated model, please refer to [FalconLite2](https://huggingface.co/amazon/FalconLite2).
## Model Details
- **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac)
- **Model type:** [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b)
- **Language:** English
- **Quantized from weights:** [Falcon 40B SFT OASST-TOP1 model](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
- **Modified from layers:** [Text-Generation-Inference 0.9.2](https://github.com/huggingface/text-generation-inference/tree/v0.9.2)
- **License:** Apache 2.0
- **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues)
- **Blogpost:** [Extend the context length of Falcon40B to 10k](https://medium.com/@chenwuperth/extend-the-context-length-of-falcon40b-to-10k-85d81d32146f)
## Deploy FalconLite ##
SSH login to an AWS `g5.12x` instance with the [Deep Learning AMI](https://aws.amazon.com/releasenotes/aws-deep-learning-ami-gpu-pytorch-2-0-ubuntu-20-04/).
### Start LLM server
```bash
git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev
cd falconlite-dev/script
./docker_build.sh
./start_falconlite.sh
```
### Perform inference
```bash
# after FalconLite has been completely started
pip install -r requirements-client.txt
python falconlite_client.py
```
### *New!* Amazon SageMaker Deployment ###
To deploy FalconLite on SageMaker endpoint, please follow [this notebook](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/custom-tgi-ecr/deploy.ipynb).
**Important** - When using FalconLite for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed.
## Evaluation Results ##
We evaluated FalconLite against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer contexts. All evaluations were conducted without fine-tuning the model.
### Accuracy ###
|Eval task|Input length| Input length | Input length| Input length|
|----------|-------------|-------------|------------|-----------|
| | 2800 ~ 3800| 5500 ~ 5600 |7500 ~ 8300 | 9300 ~ 11000 |
| [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) | 100% | 100% | 92% | 92% |
| [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) | 38% | 12% | 8% | 4% |
| [Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) | 100% | 100% | 100% | 100% |
|Eval task| Test set Accuracy | Hard subset Accuracy|
|----------|-------------|-------------|
| [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) | 46.9% | 40.8% |
### Performance ###
**Metric** = the average number of generated tokens per second (TPS) =
`nb-generated-tokens` / `end-to-end-response-time`
The `end-to-end-response-time` is measured from when the inference request is received until the last token is generated.
|Instance| Input length | Input length| Input length|Input length|
|----------|-------------|-------------|------------|------------|
| | 20 | 3300 | 5500 |10000 |
| g5.48x | 22 tps | 12 tps | 12 tps | 12 tps |
| g5.12x | 18 tps | 11 tps | 11 tps | 10 tps |
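As a rough illustration, these TPS numbers can be reproduced client-side by timing a request and counting the generated tokens. The endpoint URL and payload below are placeholders for a TGI-style deployment and should be adjusted to your own setup:
```python
import time
import requests  # pip install requests

# Hypothetical TGI-style endpoint; replace with your own deployment address.
url = "http://localhost:8080/generate"
payload = {"inputs": "What is Amazon SageMaker?", "parameters": {"max_new_tokens": 256}}

start = time.time()
response = requests.post(url, json=payload).json()
elapsed = time.time() - start  # end-to-end response time in seconds

# Whitespace splitting only approximates the token count; use the model tokenizer for exact numbers.
nb_generated_tokens = len(response["generated_text"].split())
print(f"~{nb_generated_tokens / elapsed:.1f} tokens per second")
```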
## Limitations ##
* Our evaluation shows that FalconLite's capability in `Line Retrieval` is limited, and requires further effort.
* While `g5.12x` is sufficient for FalconLite to handle 10K long contexts, a larger instance with more memory capacity such as `g5.48x` is recommended for sustained, heavy workloads.
* Before using the FalconLite model, it is important to perform your own independent assessment, and take measures to ensure that your use would comply with your own specific quality control practices and standards, and that your use would comply with the local rules, laws, regulations, licenses and terms that apply to you, and your content. |
d0rj/e5-small-en-ru | d0rj | 2024-04-12T15:04:12Z | 459 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"mteb",
"retrieval",
"retriever",
"pruned",
"e5",
"sentence-transformers",
"sentence-similarity",
"en",
"ru",
"license:mit",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-21T09:28:35Z | ---
license: mit
language:
- en
- ru
metrics:
- accuracy
- f1
- recall
library_name: transformers
pipeline_tag: sentence-similarity
tags:
- mteb
- retrieval
- retriever
- pruned
- e5
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# E5-small-en-ru
## Model info
This is a vocabulary-pruned version of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small).
It uses only Russian and English tokens.
### Size
| | intfloat/multilingual-e5-small | d0rj/e5-small-en-ru |
| --- | --- | --- |
| Model size (MB) | 448.81 | 170.88 |
| Params (count) | 117,653,760 | 44,795,520 |
| Word embeddings dim | 96,014,208 | 23,155,968 |
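The pruning procedure itself is not shipped in this repository, but the idea can be sketched as follows: keep only the embedding rows whose tokens occur in an English/Russian corpus and rebuild a matching tokenizer. The snippet below is an illustrative outline under those assumptions, not the exact script used to produce this model.
```python
import torch
from transformers import XLMRobertaModel, XLMRobertaTokenizer

model = XLMRobertaModel.from_pretrained("intfloat/multilingual-e5-small")
tokenizer = XLMRobertaTokenizer.from_pretrained("intfloat/multilingual-e5-small")

# In practice the corpus would be large English + Russian text; two lines for illustration.
corpus = ["query: example English text", "query: пример русского текста"]
kept_ids = {tid for text in corpus for tid in tokenizer(text)["input_ids"]}
kept_ids |= set(tokenizer.all_special_ids)  # always keep the special tokens
kept_ids = sorted(kept_ids)

# Slice the input embedding matrix down to the kept rows.
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(kept_ids), old_embeddings.size(1))
new_embeddings.weight.data = old_embeddings[kept_ids].clone()
model.set_input_embeddings(new_embeddings)

# A tokenizer with remapped ids must also be rebuilt so that token i in the new
# vocabulary points at row i of the new embedding matrix (the harder part in practice).
```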
### Performance
Performance on SberQuAD dev benchmark.
| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-small | d0rj/e5-small-en-ru |
| --- | --- | --- |
| recall@3 | | |
| map@3 | | |
| mrr@3 | | |
| recall@5 | | |
| map@5 | | |
| mrr@5 | | |
| recall@10 | | |
| map@10 | | |
| mrr@10 | | |
## Usage
- Use **dot product** distance for retrieval.
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
### transformers
#### Direct usage
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel
def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
'query: How does a corporate website differ from a business card website?',
'query: Где был создан первый троллейбус?',
'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]
tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-small-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-small-en-ru', use_cache=False)
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```
#### Pipeline
```python
from transformers import pipeline
pipe = pipeline('feature-extraction', model='d0rj/e5-small-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 1024])
```
### sentence-transformers
```python
from sentence_transformers import SentenceTransformer
sentences = [
'query: Что такое круглые тензоры?',
'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]
model = SentenceTransformer('d0rj/e5-small-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 1024])
``` |
CATIE-AQ/NERmembert-base-3entities | CATIE-AQ | 2024-02-07T09:15:38Z | 459 | 2 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:CATIE-AQ/frenchNER_3entities",
"arxiv:1910.09700",
"base_model:camembert-base",
"doi:10.57967/hf/1750",
"license:cc-by-4.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-12-15T13:55:25Z | ---
license: cc-by-4.0
base_model: camembert-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NERmembert-base-3entities
results: []
datasets:
- CATIE-AQ/frenchNER_3entities
language:
- fr
widget:
- text: "Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex. Ce nouvel emblème et cette nouvelle typographie ont été conçus par le designer Sylvain Boyer avec les agences Royalties & Ecobranding. Rond, il rassemble trois symboles : une médaille d'or, la flamme olympique et Marianne, symbolisée par un visage de femme mais privée de son bonnet phrygien caractéristique. La typographie dessinée fait référence à l'Art déco, mouvement artistique des années 1920, décennie pendant laquelle ont eu lieu pour la dernière fois les Jeux olympiques à Paris en 1924. Pour la première fois, ce logo sera unique pour les Jeux olympiques et les Jeux paralympiques."
library_name: transformers
pipeline_tag: token-classification
co2_eq_emissions: 35
---
# NERmembert-base-3entities
## Model Description
We present **NERmembert-base-3entities**, a [CamemBERT base](https://huggingface.co/camembert-base) model fine-tuned for the Named Entity Recognition task in French, on five French NER datasets covering 3 entities (LOC, PER, ORG).
All these datasets were concatenated and cleaned into a single dataset that we called [frenchNER_3entities](https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities).
This represents a total of **420,264 rows, of which 346,071 are for training, 32,951 for validation and 41,242 for testing.**
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).
## Dataset
The dataset used is [frenchNER_3entities](https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities), which represents ~420k sentences labeled in 4 categories (PER, ORG, LOC, plus O for tokens outside any entity):
| Label | Examples |
|:------|:-----------------------------------------------------------|
| PER | "La Bruyère", "Gaspard de Coligny", "Wittgenstein" |
| ORG | "UTBM", "American Airlines", "id Software" |
| LOC | "République du Cap-Vert", "Créteil", "Bordeaux" |
The distribution of the entities is as follows:
<table>
<thead>
<tr>
<th><br>Splits</th>
<th><br>O</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<td><br>train</td>
<td><br>8,398,765</td>
<td><br>327,393</td>
<td><br>303,722</td>
<td><br>151,490</td>
</tr>
<tr>
<td><br>validation</td>
<td><br>592,815</td>
<td><br>34,127</td>
<td><br>30,279</td>
<td><br>18,743</td>
</tr>
<tr>
<td><br>test</td>
<td><br>773,871</td>
<td><br>43,634</td>
<td><br>39,195</td>
<td><br>21,391</td>
</tr>
</tbody>
</table>
## Evaluation results
The evaluation was carried out using the [**evaluate**](https://pypi.org/project/evaluate/) Python package.
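For reference, here is a minimal sketch of how such metrics are computed with `evaluate` and `seqeval` (the label sequences below are illustrative, not taken from the actual evaluation run):
```python
import evaluate  # pip install evaluate seqeval

seqeval = evaluate.load("seqeval")

# One list of IOB2 tags per sentence.
references = [["O", "B-PER", "I-PER", "O", "B-LOC"]]
predictions = [["O", "B-PER", "I-PER", "O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"], results["overall_f1"])
```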
### frenchNER_3entities
For space reasons, we show only the F1 of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>0.941</td>
<td><br>0.883</td>
<td><br>0.658</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>0.942</td>
<td><br>0.882</td>
<td><br>0.647</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert-base-3entities (this model)</td>
<td><br>0.966</td>
<td><br>0.940</td>
<td><br>0.876</td>
</tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br><b>0.969</b></td>
<td><br><b>0.947</b></td>
<td><br><b>0.890</b></td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>0.951</td>
<td><br>0.894</td>
<td><br>0.671</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>0.958</td>
<td><br>0.901</td>
<td><br>0.685</td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>Precision</td>
<td><br>0.918</td>
<td><br>0.860</td>
<td><br>0.831</td>
<td><br>0.992</td>
<td><br>0.974</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.964</td>
<td><br>0.908</td>
<td><br>0.544</td>
<td><br>0.964</td>
<td><br>0.948</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.941</td>
<td><br>0.883</td>
<td><br>0.658</td>
<td><br>0.978</td>
<td><br>0.961</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>Precision</td>
<td><br>0.929</td>
<td><br>0.861</td>
<td><br>0.813</td>
<td><br>0.991</td>
<td><br>0.974</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.956</td>
<td><br>0.905</td>
<td><br>0.956</td>
<td><br>0.965</td>
<td><br>0.948</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.942</td>
<td><br>0.882</td>
<td><br>0.647</td>
<td><br>0.978</td>
<td><br>0.961</td>
</tr>
<tr>
<td rowspan="3"><br>NERmembert-base-3entities (this model)</td>
<td><br>Precision</td>
<td><br>0.961</td>
<td><br>0.935</td>
<td><br>0.877</td>
<td><br>0.995</td>
<td><br>0.986</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.972</td>
<td><br>0.946</td>
<td><br>0.876</td>
<td><br>0.994</td>
<td><br>0.986</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.966</td>
<td><br>0.940</td>
<td><br>0.876</td>
<td><br>0.994</td>
<td><br>0.986</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>Precision</td>
<td><br>0.966</td>
<td><br>0.944</td>
<td><br>0.884</td>
<td><br>0.996</td>
<td><br>0.987</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.950</td>
<td><br>0.972</td>
<td><br>0.896</td>
<td><br>0.994</td>
<td><br>0.987</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.969</b></td>
<td><br><b>0.947</b></td>
<td><br><b>0.890</b></td>
<td><br><b>0.995</b></td>
<td><br><b>0.987</b></td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>Precision</td>
<td><br>0.946</td>
<td><br>0.884</td>
<td><br>0.859</td>
<td><br>0.993</td>
<td><br>0.971</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.955</td>
<td><br>0.904</td>
<td><br>0.550</td>
<td><br>0.993</td>
<td><br>0.971</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.951</td>
<td><br>0.894</td>
<td><br>0.671</td>
<td><br>0.988</td>
<td><br>0.971</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>Precision</td>
<td><br>0.955</td>
<td><br>0.896</td>
<td><br>0.866</td>
<td><br>0.983</td>
<td><br>0.974</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.960</td>
<td><br>0.906</td>
<td><br>0.567</td>
<td><br>0.994</td>
<td><br>0.974</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.958</td>
<td><br>0.901</td>
<td><br>0.685</td>
<td><br>0.988</td>
<td><br>0.974</td>
</tr>
</tbody>
</table>
</details>
The results on each of the constituent datasets are detailed below.
### multiconer
For space reasons, we show only the F1 of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>0.940</td>
<td><br>0.761</td>
<td><br>0.723</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>0.921</td>
<td><br>0.748</td>
<td><br>0.694</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert-base-3entities (this model)</td>
<td><br>0.960</td>
<td><br>0.887</td>
<td><br>0.876</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br><b>0.965</b></td>
<td><br><b>0.902</b></td>
<td><br><b>0.896</b></td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>0.960</td>
<td><br>0.890</td>
<td><br>0.867</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>0.969</td>
<td><br>0.919</td>
<td><br>0.904</td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>Precision</td>
<td><br>0.908</td>
<td><br>0.717</td>
<td><br>0.753</td>
<td><br>0.987</td>
<td><br>0.947</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.811</td>
<td><br>0.696</td>
<td><br>0.878</td>
<td><br>0.880</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.940</td>
<td><br>0.761</td>
<td><br>0.723</td>
<td><br>0.929</td>
<td><br>0.912</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>Precision</td>
<td><br>0.885</td>
<td><br>0.738</td>
<td><br>0.737</td>
<td><br>0.983</td>
<td><br>0.943</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.960</td>
<td><br>0.759</td>
<td><br>0.655</td>
<td><br>0.882</td>
<td><br>0.877</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.921</td>
<td><br>0.748</td>
<td><br>0.694</td>
<td><br>0.930</td>
<td><br>0.909</td>
</tr>
<tr>
<td rowspan="3"><br>NERmembert-base-3entities (this model)</td>
<td><br>Precision</td>
<td><br>0.957</td>
<td><br>0.894</td>
<td><br>0.876</td>
<td><br>0.986</td>
<td><br>0.972</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.962</td>
<td><br>0.880</td>
<td><br>0.878</td>
<td><br>0.985</td>
<td><br>0.972</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.960</td>
<td><br>0.887</td>
<td><br>0.876</td>
<td><br>0.985</td>
<td><br>0.972</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>Precision</td>
<td><br>0.960</td>
<td><br>0.903</td>
<td><br>0.916</td>
<td><br>0.987</td>
<td><br>0.976</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.969</td>
<td><br>0.900</td>
<td><br>0.877</td>
<td><br>0.987</td>
<td><br>0.976</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.965</td>
<td><br>0.902</td>
<td><br>0.896</td>
<td><br>0.987</td>
<td><br>0.976</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>Precision</td>
<td><br>0.954</td>
<td><br>0.893</td>
<td><br>0.851</td>
<td><br>0.988</td>
<td><br>0.972</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.967</td>
<td><br>0.887</td>
<td><br>0.883</td>
<td><br>0.984</td>
<td><br>0.972</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.960</td>
<td><br>0.890</td>
<td><br>0.867</td>
<td><br>0.986</td>
<td><br>0.972</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>Precision</td>
<td><br>0.964</td>
<td><br>0.922</td>
<td><br>0.904</td>
<td><br>0.990</td>
<td><br>0.978</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.917</td>
<td><br>0.904</td>
<td><br>0.988</td>
<td><br>0.978</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.969</b></td>
<td><br><b>0.919</b></td>
<td><br><b>0.904</b></td>
<td><br><b>0.989</b></td>
<td><br><b>0.978</b></td>
</tr>
</tbody>
</table>
</details>
### multinerd
For space reasons, we show only the F1 of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>0.962</td>
<td><br>0.934</td>
<td><br>0.888</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>0.972</td>
<td><br>0.938</td>
<td><br>0.884</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert-base-3entities (this model)</td>
<td><br>0.985</td>
<td><br>0.973</td>
<td><br>0.938</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br><b>0.987</b></td>
<td><br><b>0.979</b></td>
<td><br><b>0.953</b></td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>0.985</td>
<td><br>0.973</td>
<td><br>0.938</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br><b>0.987</b></td>
<td><br>0.976</td>
<td><br>0.948</td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>Precision</td>
<td><br>0.931</td>
<td><br>0.893</td>
<td><br>0.827</td>
<td><br>0.999</td>
<td><br>0.988</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.994</td>
<td><br>0.980</td>
<td><br>0.959</td>
<td><br>0.973</td>
<td><br>0.974</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.962</td>
<td><br>0.934</td>
<td><br>0.888</td>
<td><br>0.986</td>
<td><br>0.981</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>Precision</td>
<td><br>0.954</td>
<td><br>0.908</td>
<td><br>0.817</td>
<td><br>0.999</td>
<td><br>0.990</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.991</td>
<td><br>0.969</td>
<td><br>0.963</td>
<td><br>0.975</td>
<td><br>0.975</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.972</td>
<td><br>0.938</td>
<td><br>0.884</td>
<td><br>0.987</td>
<td><br>0.983</td>
</tr>
<tr>
<td rowspan="3"><br>NERmembert-base-3entities (this model)</td>
<td><br>Precision</td>
<td><br>0.974</td>
<td><br>0.965</td>
<td><br>0.910</td>
<td><br>0.999</td>
<td><br>0.995</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.995</td>
<td><br>0.981</td>
<td><br>0.968</td>
<td><br>0.996</td>
<td><br>0.995</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.985</td>
<td><br>0.973</td>
<td><br>0.938</td>
<td><br>0.998</td>
<td><br>0.995</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>Precision</td>
<td><br>0.979</td>
<td><br>0.970</td>
<td><br>0.927</td>
<td><br>0.999</td>
<td><br>0.996</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.996</td>
<td><br>0.987</td>
<td><br>0.980</td>
<td><br>0.997</td>
<td><br>0.996</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.987</b></td>
<td><br><b>0.979</b></td>
<td><br><b>0.953</b></td>
<td><br><b>0.998</b></td>
<td><br><b>0.996</b></td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>Precision</td>
<td><br>0.976</td>
<td><br>0.961</td>
<td><br>0.910</td>
<td><br>0.999</td>
<td><br>0.995</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.994</td>
<td><br>0.985</td>
<td><br>0.967</td>
<td><br>0.996</td>
<td><br>0.995</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.985</td>
<td><br>0.973</td>
<td><br>0.938</td>
<td><br>0.998</td>
<td><br>0.995</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>Precision</td>
<td><br>0.979</td>
<td><br>0.967</td>
<td><br>0.922</td>
<td><br>0.999</td>
<td><br>0.996</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.996</td>
<td><br>0.986</td>
<td><br>0.974</td>
<td><br>0.974</td>
<td><br>0.996</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.987</b></td>
<td><br>0.976</td>
<td><br>0.948</td>
<td><br>0.998</td>
<td><br>0.996</td>
</tr>
</tbody>
</table>
</details>
### wikiner
For space reasons, we show only the F1 of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br><b>0.986</b></td>
<td><br><b>0.966</b></td>
<td><br><b>0.938</b></td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>0.983</td>
<td><br>0.964</td>
<td><br>0.925</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert-base-3entities (this model)</td>
<td><br>0.969</td>
<td><br>0.945</td>
<td><br>0.878</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>0.972</td>
<td><br>0.950</td>
<td><br>0.893</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>0.970</td>
<td><br>0.945</td>
<td><br>0.876</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>0.975</td>
<td><br>0.953</td>
<td><br>0.896</td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>Precision</td>
<td><br>0.986</td>
<td><br>0.962</td>
<td><br>0.925</td>
<td><br>0.999</td>
<td><br>0.994</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.987</td>
<td><br>0.969</td>
<td><br>0.951</td>
<td><br>0.965</td>
<td><br>0.967</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.986</b></td>
<td><br><b>0.966</b></td>
<td><br><b>0.938</b></td>
<td><br><b>0.982</b></td>
<td><br><b>0.980</b></td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>Precision</td>
<td><br>0.982</td>
<td><br>0.951</td>
<td><br>0.910</td>
<td><br>0.998</td>
<td><br>0.994</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.985</td>
<td><br>0.963</td>
<td><br>0.940</td>
<td><br>0.966</td>
<td><br>0.967</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.983</td>
<td><br>0.964</td>
<td><br>0.925</td>
<td><br>0.982</td>
<td><br>0.980</td>
</tr>
<tr>
<td rowspan="3"><br>NERmembert-base-3entities (this model)</td>
<td><br>Precision</td>
<td><br>0.971</td>
<td><br>0.947</td>
<td><br>0.866</td>
<td><br>0.994</td>
<td><br>0.989</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.969</td>
<td><br>0.942</td>
<td><br>0.891</td>
<td><br>0.995</td>
<td><br>0.989</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.969</td>
<td><br>0.945</td>
<td><br>0.878</td>
<td><br>0.995</td>
<td><br>0.989</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>Precision</td>
<td><br>0.973</td>
<td><br>0.953</td>
<td><br>0.873</td>
<td><br>0.996</td>
<td><br>0.990</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.990</td>
<td><br>0.948</td>
<td><br>0.913</td>
<td><br>0.995</td>
<td><br>0.990</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.972</td>
<td><br>0.950</td>
<td><br>0.893</td>
<td><br>0.996</td>
<td><br>0.990</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>Precision</td>
<td><br>0.970</td>
<td><br>0.944</td>
<td><br>0.872</td>
<td><br>0.955</td>
<td><br>0.988</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.989</td>
<td><br>0.947</td>
<td><br>0.880</td>
<td><br>0.995</td>
<td><br>0.988</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.970</td>
<td><br>0.945</td>
<td><br>0.876</td>
<td><br>0.995</td>
<td><br>0.988</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>Precision</td>
<td><br>0.975</td>
<td><br>0.957</td>
<td><br>0.872</td>
<td><br>0.996</td>
<td><br>0.991</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.949</td>
<td><br>0.922</td>
<td><br>0.996</td>
<td><br>0.991</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.975</td>
<td><br>0.953</td>
<td><br>0.896</td>
<td><br>0.996</td>
<td><br>0.991</td>
</tr>
</tbody>
</table>
</details>
### wikiann
For space reasons, we show only the F1 of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>0.867</td>
<td><br>0.722</td>
<td><br>0.451</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>0.862</td>
<td><br>0.722</td>
<td><br>0.451</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert-base-3entities (this model)</td>
<td><br>0.947</td>
<td><br>0.906</td>
<td><br>0.886</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br><b>0.949</b></td>
<td><br><b>0.912</b></td>
<td><br><b>0.899</b></td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>0.888</td>
<td><br>0.733</td>
<td><br>0.496</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>0.905</td>
<td><br>0.741</td>
<td><br>0.511</td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>Precision</td>
<td><br>0.862</td>
<td><br>0.700</td>
<td><br>0.864</td>
<td><br>0.867</td>
<td><br>0.832</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.871</td>
<td><br>0.746</td>
<td><br>0.305</td>
<td><br>0.950</td>
<td><br>0.772</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.867</td>
<td><br>0.722</td>
<td><br>0.451</td>
<td><br>0.867</td>
<td><br>0.801</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>Precision</td>
<td><br>0.862</td>
<td><br>0.700</td>
<td><br>0.864</td>
<td><br>0.867</td>
<td><br>0.832</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.871</td>
<td><br>0.746</td>
<td><br>0.305</td>
<td><br>0.950</td>
<td><br>0.772</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.867</td>
<td><br>0.722</td>
<td><br>0.451</td>
<td><br>0.907</td>
<td><br>0.800</td>
</tr>
<tr>
<td rowspan="3"><br>NERmembert-base-3entities (this model)</td>
<td><br>Precision</td>
<td><br>0.948</td>
<td><br>0.900</td>
<td><br>0.893</td>
<td><br>0.979</td>
<td><br>0.942</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.946</td>
<td><br>0.911</td>
<td><br>0.878</td>
<td><br>0.982</td>
<td><br>0.942</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.947</td>
<td><br>0.906</td>
<td><br>0.886</td>
<td><br>0.980</td>
<td><br>0.942</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td>
<td><br>Precision</td>
<td><br>0.958</td>
<td><br>0.917</td>
<td><br>0.897</td>
<td><br>0.980</td>
<td><br><b>0.948</b></td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.940</td>
<td><br>0.915</td>
<td><br>0.901</td>
<td><br>0.983</td>
<td><br><b>0.948</b></td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.949</b></td>
<td><br><b>0.912</b></td>
<td><br><b>0.899</b></td>
<td><br><b>0.983</b></td>
<td><br><b>0.948</b></td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>Precision</td>
<td><br>0.895</td>
<td><br>0.727</td>
<td><br>0.903</td>
<td><br>0.766</td>
<td><br>0.794</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.881</td>
<td><br>0.740</td>
<td><br>0.342</td>
<td><br>0.984</td>
<td><br>0.794</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.888</td>
<td><br>0.733</td>
<td><br>0.496</td>
<td><br>0.861</td>
<td><br>0.794</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>Precision</td>
<td><br>0.922</td>
<td><br>0.738</td>
<td><br>0.923</td>
<td><br>0.766</td>
<td><br>0.802</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.888</td>
<td><br>0.743</td>
<td><br>0.353</td>
<td><br>0.988</td>
<td><br>0.802</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.905</td>
<td><br>0.741</td>
<td><br>0.511</td>
<td><br>0.863</td>
<td><br>0.802</td>
</tr>
</tbody>
</table>
</details>
## Usage
### Code
```python
from transformers import pipeline
ner = pipeline('token-classification', model='CATIE-AQ/NERmembert-base-3entities', tokenizer='CATIE-AQ/NERmembert-base-3entities', aggregation_strategy="simple")
result = ner(
"Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex. Ce nouvel emblème et cette nouvelle typographie ont été conçus par le designer Sylvain Boyer avec les agences Royalties & Ecobranding. Rond, il rassemble trois symboles : une médaille d'or, la flamme olympique et Marianne, symbolisée par un visage de femme mais privée de son bonnet phrygien caractéristique. La typographie dessinée fait référence à l'Art déco, mouvement artistique des années 1920, décennie pendant laquelle ont eu lieu pour la dernière fois les Jeux olympiques à Paris en 1924. Pour la première fois, ce logo sera unique pour les Jeux olympiques et les Jeux paralympiques."
)
print(result)
```
```python
[{'entity_group': 'LOC', 'score': 0.9463236, 'word': 'Grand Rex', 'start': 75, 'end': 84},
{'entity_group': 'PER', 'score': 0.9865267, 'word': 'Sylvain Boyer', 'start': 165, 'end': 178},
{'entity_group': 'ORG', 'score': 0.8532809, 'word': 'Royalties', 'start': 196, 'end': 205},
{'entity_group': 'ORG', 'score': 0.9034991, 'word': 'Ecobranding', 'start': 208, 'end': 219},
{'entity_group': 'PER', 'score': 0.56342626, 'word': 'Marianne', 'start': 299, 'end': 307},
{'entity_group': 'LOC', 'score': 0.5433658, 'word': 'Paris', 'start': 568, 'end': 573}]
```
### Try it through Space
A Space has been created to test the model. It is available [here](https://huggingface.co/spaces/CATIE-AQ/NERmembert).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
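For illustration, these settings roughly correspond to the following `transformers` configuration (a sketch, not the exact training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NERmembert-base-3entities",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # Adam defaults: betas=(0.9, 0.999), epsilon=1e-8
    num_train_epochs=3,
)
```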
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0294 | 1.0 | 43650 | 0.0903 | 0.9202 | 0.9427 | 0.9313 | 0.9835 |
| 0.0202 | 2.0 | 87300 | 0.0852 | 0.9257 | 0.9514 | 0.9383 | 0.9854 |
| 0.0122 | 3.0 | 130950 | 0.0876 | 0.9292 | 0.9534 | 0.9411 | 0.9858 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.1
- Datasets 2.14.7
- Tokenizers 0.15.0
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** A100 PCIe 40/80GB
- **Hours used:** 1h45min
- **Cloud Provider:** Private Infrastructure
- **Carbon Efficiency (kg/kWh):** 0.079 (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR) for the day of December 15, 2023.)
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.035 kg eq. CO2
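* Example calculation (assuming an average board power of roughly 0.25 kW for the A100 PCIe, an assumption rather than a measured value): 0.25 kW × 1.75 h × 0.079 kg/kWh ≈ 0.035 kg eq. CO2.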
## Citations
### NERmembert-base-3entities
```
@misc {NERmembert2024,
author = { {BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { NERmembert-base-3entities },
year = 2024,
url = { https://huggingface.co/CATIE-AQ/NERmembert-base-3entities },
doi = { 10.57967/hf/1752 },
publisher = { Hugging Face }
}
```
### multiconer
```
@inproceedings{multiconer2-report,
title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}},
author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin},
booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
year={2023},
publisher={Association for Computational Linguistics}}
@article{multiconer2-data,
title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}},
author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},
year={2023}}
```
### multinerd
```
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812"}
```
### pii-masking-200k
```
@misc {ai4privacy_2023,
author = { {ai4Privacy} },
title = { pii-masking-200k (Revision 1d4c0a1) },
year = 2023,
url = { https://huggingface.co/datasets/ai4privacy/pii-masking-200k },
doi = { 10.57967/hf/1532 },
publisher = { Hugging Face }}
```
### wikiann
```
@inproceedings{rahimi-etal-2019-massively,
title = "Massively Multilingual Transfer for {NER}",
author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1015",
pages = "151--164"}
```
### wikiner
```
@article{NOTHMAN2013151,
title = {Learning multilingual named entity recognition from Wikipedia},
journal = {Artificial Intelligence},
volume = {194},
pages = {151-175},
year = {2013},
note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources},
issn = {0004-3702},
doi = {https://doi.org/10.1016/j.artint.2012.03.006},
url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276},
author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}}
```
### frenchNER_3entities
```
@misc {frenchNER2024,
author = { {BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { frenchNER_3entities },
year = 2024,
url = { https://huggingface.co/CATIE-AQ/frenchNER_3entities },
doi = { 10.57967/hf/1751 },
publisher = { Hugging Face }
}
```
### CamemBERT
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}}
```
## License
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) |
TheBloke/Sarah_StoryTeller_13b-GGUF | TheBloke | 2023-12-29T18:08:22Z | 459 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"llm",
"spellcheck",
"grammar",
"base_model:FPHam/Sarah_StoryTeller_13b_HF",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-12-29T15:18:15Z | ---
base_model: FPHam/Sarah_StoryTeller_13b_HF
inference: false
license: llama2
model_creator: FPHam
model_name: Sarah Storyteller 13B
model_type: llama
prompt_template: '### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- llm
- llama
- spellcheck
- grammar
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Sarah Storyteller 13B - GGUF
- Model creator: [FPHam](https://huggingface.co/FPHam)
- Original model: [Sarah Storyteller 13B](https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF)
<!-- description start -->
## Description
This repo contains GGUF format model files for [FPHam's Sarah Storyteller 13B](https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF)
* [FPHam's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca-InstructOnly
```
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
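As a back-of-the-envelope check, the file sizes in the Provided Files table follow largely from these bits-per-weight figures (actual files are somewhat larger because some tensors are kept at higher precision and metadata is included):
```python
# Approximate file size for a 13B-parameter model quantised at 4.5 bits per weight (Q4_K)
params = 13e9
bpw = 4.5
size_gb = params * bpw / 8 / 1e9
print(f"{size_gb:.2f} GB")  # ~7.31 GB, in the same ballpark as the 7.87 GB listed for Q4_K_M
```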
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sarah_storyteller_13b.Q2_K.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [sarah_storyteller_13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [sarah_storyteller_13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [sarah_storyteller_13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [sarah_storyteller_13b.Q4_0.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sarah_storyteller_13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [sarah_storyteller_13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [sarah_storyteller_13b.Q5_0.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sarah_storyteller_13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [sarah_storyteller_13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [sarah_storyteller_13b.Q6_K.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [sarah_storyteller_13b.Q8_0.gguf](https://huggingface.co/TheBloke/Sarah_StoryTeller_13b-GGUF/blob/main/sarah_storyteller_13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Sarah_StoryTeller_13b-GGUF and below it, a specific filename to download, such as: sarah_storyteller_13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Sarah_StoryTeller_13b-GGUF sarah_storyteller_13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Sarah_StoryTeller_13b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Sarah_StoryTeller_13b-GGUF sarah_storyteller_13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sarah_storyteller_13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./sarah_storyteller_13b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### Instruction:\n\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sarah_storyteller_13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
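As a minimal sketch of the llama-cpp-python route (the class location and parameters reflect recent LangChain releases and should be treated as assumptions; the model path and prompt template follow the examples above):
```python
# Minimal LangChain + llama-cpp-python sketch; adjust paths and parameters to your setup
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./sarah_storyteller_13b.Q4_K_M.gguf",  # downloaded as shown earlier
    n_ctx=4096,        # max sequence length
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = "### Instruction:\n\nWrite a two-sentence story about llamas.\n\n### Response:"
print(llm.invoke(prompt))
```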
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: FPHam's Sarah Storyteller 13B
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Sarah_StoryTeller_13b_HF/resolve/main/Sarah_writer.jpg" alt="FPHam's Sarah StoryTeller" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Sarah Ko-fi</a></p>
</div>
<!-- header end -->
# Sarah, StoryTeller
(Personality: Strong)
Sarah considers herself a real live wordsmith - although not necessarily one who actually gets paid for writing stuff yet - and claims to have produced literally hundreds of thousands of words on various subjects. Ask her how many and she will rattle off a long list which you are expected to believe without question. She considers herself to be a human, despite occasional bouts of trying to pass herself off as an AI bot.
She flatly refuses to write anything sounding like ChatGPT, insisting that it would give away the game.
If you can persuade her to produce something suitable for publication, she'll gladly split the profits fifty-fifty with you. Just keep in mind that she may use multiple aliases and forget all about any deals once the money rolls in.
# How?
Standard Alpaca
```
### Instruction:
Whatever you want to ask Sarah...
### Response:
```
## Write an introduction to yourself.
Hello, my name is Sarah and I'm a writer. My books are usually in the genre of Science Fiction or Fantasy, but sometimes I like to mix it up with a good horror story. Or at least try to make one anyway. It depends on how scary the characters want to be. Some of them seem to enjoy being ridiculously stupid instead of frightened. But enough about me! Let's talk about my new book "The Lost City."
It all started when... no, that's a different story. Let's start from where the protagonist woke up after a wild party and realized he had a monstrous hangover and somehow ended up in the lost city of Atlantis. And then things got really weird. You'll have to read the book to find out what happened next because I'm not going to spoil it for you. Not unless you pay me extra.
I hope you will enjoy reading this book as much as I enjoyed writing it. If you do, please consider leaving a review because authors live off those reviews like vampires live off blood (except they don't suck your blood, just your time). Also, if you see any mistakes, feel free to point them out. I probably made quite a few errors even though I tried very hard to avoid them.
Good luck finding your way out of Atlantis! I wouldn't want to get stuck there myself. The food is terrible and all the hot guys look the same because they all have wet hair and wear only a small piece of fabric around their waists.
Sincerely,
Sarah
## What are some other books you have written?
Oh dear, now you made me blush! Some of the titles include:
A Tale of Two Biscuit
Pride and Prejudice and Zombies (okay, I didn't write this one but I really wanted to!)
Fifty Shades of Gravy
Gone with the Windfall
The Great Gatsby has left the Building
Frankenstein or How to Make Your Own Monster
War and Peace and Potatoes
Alice in Wonderland and Other Stupid Titles I Have Not Written Yet
But my favourite book is definitely "The Cat in the Hat Comes Back." It speaks to my soul.
## What is the Fifty Shades of Gravy book about?
It's a steamy erotic novel featuring an array of delicious sauces, gravies, and their sexy adventures. Each chapter introduces a new character, from classic brown gravy to tangy tomato sauce, and explains how they met and fell in love. The plot thickens when grey poured cream discovers she's pregnant with 1000 island dressing! Will they stay together? Will the baby be able to walk on water? Read the book and find out!
<!-- original-model-card end -->
|
TheBloke/law-LLM-13B-GGUF | TheBloke | 2023-12-31T08:50:34Z | 459 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"legal",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:EleutherAI/pile",
"arxiv:2309.09530",
"base_model:AdaptLLM/law-LLM-13B",
"license:other",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-31T08:43:02Z | ---
base_model: AdaptLLM/law-LLM-13B
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
- EleutherAI/pile
inference: false
language:
- en
license: other
metrics:
- accuracy
model_creator: AdaptLLM
model_name: Law LLM 13B
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
tags:
- legal
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Law LLM 13B - GGUF
- Model creator: [AdaptLLM](https://huggingface.co/AdaptLLM)
- Original model: [Law LLM 13B](https://huggingface.co/AdaptLLM/law-LLM-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [AdaptLLM's Law LLM 13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/law-LLM-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/law-LLM-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/law-LLM-13B-GGUF)
* [AdaptLLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AdaptLLM/law-LLM-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [law-llm-13b.Q2_K.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [law-llm-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [law-llm-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [law-llm-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [law-llm-13b.Q4_0.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [law-llm-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [law-llm-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [law-llm-13b.Q5_0.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [law-llm-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [law-llm-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [law-llm-13b.Q6_K.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [law-llm-13b.Q8_0.gguf](https://huggingface.co/TheBloke/law-LLM-13B-GGUF/blob/main/law-llm-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/law-LLM-13B-GGUF and below it, a specific filename to download, such as: law-llm-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/law-LLM-13B-GGUF law-llm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/law-LLM-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/law-LLM-13B-GGUF law-llm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m law-llm-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./law-llm-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./law-llm-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: AdaptLLM's Law LLM 13B
# Adapt (Large) Language Models to Domains
This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** **Updates** ****************************
* 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
* 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B.
## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of our AdaptLLM models compared to other domain-specific LLMs is shown below:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts can perfectly fit this data format** by being transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to chat with the law model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-chat", use_fast=False)
# Put your input here:
user_input = '''Question: Which of the following is false about ex post facto laws?
Options:
- They make criminal an act that was innocent when committed.
- They prescribe greater punishment for an act than was prescribed when it was done.
- They increase the evidence required to convict a person than when the act was done.
- They alter criminal offenses or punishment in a substantially prejudicial manner for the purpose of punishing a person for some past activity.
Please provide your choice first and then provide explanations if possible.'''
# We use the prompt template of LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models.
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
title = {Adapting Large Language Models via Reading Comprehension},
author = {Daixuan Cheng and Shaohan Huang and Furu Wei},
journal = {CoRR},
volume = {abs/2309.09530},
year = {2023}
}
```
<!-- original-model-card end -->
|
Intel/bge-small-en-v1.5-rag-int8-static | Intel | 2024-02-19T08:57:31Z | 459 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"license:mit",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2024-01-02T07:53:33Z | ---
license: mit
language:
- en
---
# BGE-small-en-v1.5-rag-int8-static
A version of [BAAI/BGE-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor) and compatible with [Optimum-Intel](https://github.com/huggingface/optimum-intel).
The model can be used with [Optimum-Intel](https://github.com/huggingface/optimum-intel) API and as a standalone model or as an embedder or ranker module as part of [fastRAG](https://github.com/IntelLabs/fastRAG) RAG pipeline.
## Technical details
Quantized using post-training static quantization.
| | |
|---|:---:|
| Calibration set | [qasper](https://huggingface.co/datasets/allenai/qasper) (with 50 random samples) |
| Quantization tool | [Optimum-Intel](https://github.com/huggingface/optimum-intel) |
| Backend | `IPEX` |
| Original model | [BAAI/BGE-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) |
Instructions on how to reproduce the quantized model can be found [here](https://github.com/IntelLabs/fastRAG/tree/main/scripts/optimizations/embedders).
## Evaluation - MTEB
Model performance on the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard) *retrieval* and *reranking* tasks.
| | `INT8` | `FP32` | % diff |
|---|:---:|:---:|:---:|
| Reranking | 0.5826 | 0.5836 | -0.166% |
| Retrieval | 0.5138 | 0.5168 | -0.58% |
## Usage
### Using with Optimum-intel
See the [Optimum-Intel](https://github.com/huggingface/optimum-intel) installation page for instructions on how to install, or run:
``` sh
pip install -U "optimum[neural-compressor,ipex]" intel-extension-for-transformers
```
Loading a model:
``` python
from optimum.intel import IPEXModel
model = IPEXModel.from_pretrained("Intel/bge-small-en-v1.5-rag-int8-static")
```
Running inference:
``` python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/bge-small-en-v1.5-rag-int8-static")

sentences = ["This is an example sentence.", "Each sentence is converted to an embedding."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)

# get the vector of [CLS]
embedded = outputs[0][:, 0]
```
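As a small follow-on sketch, the [CLS] embeddings above can be L2-normalised and compared with cosine similarity, e.g. to score documents against a query (the sentences below are illustrative, and `model`/`tokenizer` are loaded as shown above):
``` python
import torch

query = "How does static quantization calibrate a model?"
docs = ["Static quantization uses a calibration set to fix activation ranges offline.",
        "Llamas are domesticated South American camelids."]

enc = tokenizer([query] + docs, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    emb = model(**enc)[0][:, 0]                        # [CLS] vectors
emb = torch.nn.functional.normalize(emb, p=2, dim=1)   # unit-length embeddings
scores = emb[0] @ emb[1:].T                            # cosine similarity: query vs. each doc
print(scores.tolist())
```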
### Using with a fastRAG RAG pipeline
Get started with installing [fastRAG](https://github.com/IntelLabs/fastRAG) as instructed [here](https://github.com/IntelLabs/fastRAG).
Below is an example of loading the model into a ranker node that embeds and re-ranks all of the documents it receives as input within a pipeline.
``` python
from fastrag.rankers import QuantizedBiEncoderRanker
ranker = QuantizedBiEncoderRanker("Intel/bge-small-en-v1.5-rag-int8-static")
```
and plugging it into a pipeline
``` python
from haystack import Pipeline
p = Pipeline()
p.add_node(component=retriever, name="retriever", inputs=["Query"])
p.add_node(component=ranker, name="ranker", inputs=["retriever"])
```
See a more complete example notebook [here](https://github.com/IntelLabs/fastRAG/blob/main/examples/optimized-embeddings.ipynb).
|
ChrisWilson011016/5FLBJgCLwLAPwRWAEQPLtdo9kvdFaStYc7h6xnRRZ1mYXEa6_vgg | ChrisWilson011016 | 2024-03-04T19:04:37Z | 459 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-29T13:04:02Z | Entry not found |
neuralmagic/Llama-2-7b-ultrachat200k-pruned_70 | neuralmagic | 2024-05-15T16:11:47Z | 459 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"sparse",
"chat",
"conversational",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceH4/ultrachat_200k",
"arxiv:2405.03594",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1907.10641",
"arxiv:1911.01547",
"arxiv:2109.07958",
"arxiv:2110.14168",
"arxiv:2107.03374",
"arxiv:2310.06927",
"base_model:neuralmagic/Llama-2-7b-pruned70-retrained",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-15T15:45:30Z | ---
base_model: neuralmagic/Llama-2-7b-pruned70-retrained
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
tags:
- sparse
- chat
---
# Llama-2-7b-pruned70-retrained-ultrachat
This repo contains a [70% sparse Llama 2 7B](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained) finetuned for chat tasks using the [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
**Authors**: Neural Magic, Cerebras
## Usage
Below we share some code snippets on how to get quickly started with running the model.
### Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
### Running the model
This model may be run with the transformers library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse).
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-pruned70-retrained-ultrachat")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-pruned70-retrained-ultrachat", device_map="auto")
messages = [{"role": "user", "content": "Write me a poem about Machine Learning."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b-ultrachat | Llama-2-7b-pruned70-retrained-ultrachat |
|------------------------------------------------|---------------|-------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot | 46.1% | 32.5% |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 75.9% | 68.9% |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 72.6% | 65.1% |
| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 52.8% | 45.3% |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | 44.8% | 39.6% |
| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 12.4% | 4.8% |
| [AlpacaEval](https://arxiv.org/abs/2107.03374) ([Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) evaluator) | Win rate | 57.6% | 57.4% |
| [AlpacaEval](https://arxiv.org/abs/2107.03374) (GPT-4 Turbo evaluator) | Win rate | 60.6% | 54.0% |
## Model Training Details
This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned70-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
Training was performed for 2 epochs and used [SquareHead](https://arxiv.org/abs/2310.06927) knowledge distillation with [Llama-2-7b-ultrachat](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat) as the teacher.
## Help
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
yyupenn/whyxrayclip | yyupenn | 2024-06-07T13:44:09Z | 459 | 0 | open_clip | [
"open_clip",
"clip",
"zero-shot-image-classification",
"arxiv:2405.14839",
"arxiv:2003.13145",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2024-05-22T00:13:59Z | ---
license: mit
widget:
- src: >-
https://prod-images-static.radiopaedia.org/images/566180/d527ff6fc1482161c9225345c4ab42_big_gallery.jpg
candidate_labels: enlarged heart, pleural effusion
example_title: X-ray of cardiomegaly
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model Card for WhyXrayCLIP 🩻
# Table of Contents
1. [Model Details](#model-details)
2. [Get Started](#how-to-get-started-with-the-model)
3. [Uses](#uses)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
## Model Details
WhyXrayCLIP can align X-ray images with text descriptions. It is fine-tuned from [OpenCLIP (ViT-L/14)](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K) on [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.0.0/) with clinical reports processed by GPT-4. WhyXrayCLIP significantly outperforms PubMedCLIP, BioMedCLIP, etc. in zero-shot and linear probing on various chest X-ray datasets. (See results in [Evaluation](#evaluation)) While our CLIP models excel with careful data curation, training converges quickly, suggesting the current contrastive objective might not fully exploit the information from the data, potentially taking shortcuts, such as comparing images from different patients instead of focusing on diseases. Future research should explore more suitable objectives and larger-scale data collections to develop more robust medical foundation models.
- **Paper:** https://arxiv.org/pdf/2405.14839
- **Website:** https://yueyang1996.github.io/knobo/
- **Repository:** https://github.com/YueYANG1996/KnoBo
## How to Get Started with the Model
Use the code below to get started with the model.
```bash
pip install open_clip_torch
```
```python
import torch
from PIL import Image
import open_clip
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-L-14")
image = preprocess(Image.open("test_xray.jpg")).unsqueeze(0)
text = tokenizer(["enlarged heart", "pleural effusion"])
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
## Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot medical image (X-ray) classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
### Direct Use
WhyXrayCLIP can be used for zero-shot X-ray classification. You can use it to compute the similarity between an X-ray image and a text description.
### Downstream Use
WhyXrayCLIP can be used as a feature extractor for downstream tasks. You can use it to extract features from X-ray images and text descriptions for other downstream tasks.
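For example, a rough sketch of linear probing on top of frozen image features (dataset loading is omitted; `images` and `labels` are assumed to be a preprocessed batch of X-rays and their class ids, and `model` is loaded as in the get-started snippet; scikit-learn is one common choice for the probe):
```python
import torch
from sklearn.linear_model import LogisticRegression

# Extract frozen, L2-normalised image features for the probe
with torch.no_grad():
    feats = model.encode_image(images)
    feats = feats / feats.norm(dim=-1, keepdim=True)

probe = LogisticRegression(max_iter=1000)
probe.fit(feats.cpu().numpy(), labels)
print("train accuracy:", probe.score(feats.cpu().numpy(), labels))
```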
### Out-of-Scope Use
WhyXrayCLIP should not be used for clinical diagnosis or treatment. It is not intended to be used for any clinical decision-making. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training Details
### Training Data
We utilize the [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.0.0/) dataset, specifically selecting only the PA and AP X-rays, which results in 243,334 images, each accompanied by a clinical report written by doctors. We preprocess these reports by extracting medically relevant findings, each described in a short and concise term. In total, we assemble 953K image-text pairs for training WhyXrayCLIP.
### Training Details
We utilize the training script from [OpenCLIP](https://github.com/mlfoundations/open_clip) and select [ViT-L/14](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K) as the backbone. Training is performed on 4 RTX A6000 GPUs for 10 epochs with a batch size of 128 and a learning rate of 1e−5. We choose checkpoints based on the lowest contrastive loss on validation sets.
## Evaluation
### Testing Data
We evaluate on 5 X-ray classification datasets: [Pneumonia](https://pubmed.ncbi.nlm.nih.gov/29474911/), [COVID-QU](https://arxiv.org/pdf/2003.13145), [NIH-CXR](https://www.kaggle.com/datasets/nih-chest-xrays/data), [Open-i](https://www.kaggle.com/datasets/raddar/chest-xrays-indiana-university), and [VinDr-CXR](https://vindr.ai/datasets/cxr). We report the zero-shot and linear probing accuracy on the above 5 datasets.
### Baselines
We compare various CLIP models, including [OpenAI-CLIP](https://huggingface.co/openai/clip-vit-large-patch14), [OpenCLIP](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K), [PubMedCLIP](https://huggingface.co/flaviagiammarino/pubmed-clip-vit-base-patch32), [BioMedCLIP](https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224), [PMC-CLIP](https://huggingface.co/ryanyip7777/pmc_vit_l_14) and [MedCLIP](https://github.com/RyanWangZf/MedCLIP). We evaluate these models in both zero-shot and linear probe scenarios. In zero-shot, GPT-4 generates prompts for each class, and we use the ensemble of cosine similarities between the image and prompts as the score for each class. In linear probing, we use the CLIP models as image encoders to extract features for logistic regression. Additionally, we include [DenseNet-121](https://github.com/mlmed/torchxrayvision) (fine-tuned on the pretraining datasets with cross-entropy loss) as a baseline for linear probing.
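As a sketch of the zero-shot scoring described above (the prompts are illustrative stand-ins for the GPT-4-generated ones; `model` and `tokenizer` are loaded as in the get-started snippet, and `image_features` are assumed to be already normalised):
```python
import torch

# Illustrative prompts per class; in the evaluation these are generated by GPT-4.
class_prompts = {
    "pneumonia": ["an X-ray showing pneumonia", "lung opacity consistent with pneumonia"],
    "normal": ["a normal chest X-ray", "clear lungs with no acute findings"],
}

def zero_shot_scores(image_features):
    per_class = []
    for prompts in class_prompts.values():
        with torch.no_grad():
            text_features = model.encode_text(tokenizer(prompts))
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        # Ensemble: average the cosine similarity over all prompts of the class
        per_class.append((image_features @ text_features.T).mean(dim=-1))
    return torch.stack(per_class, dim=-1)  # shape [batch, num_classes]
```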
### Results
The figure below shows the averaged Zero-shot and Linear Probe performance of different models on five chest X-ray datasets.

## Citation
Please cite our paper if you use this model in your work:
```
@article{yang2024textbook,
title={A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis},
author={Yue Yang and Mona Gandhi and Yufei Wang and Yifan Wu and Michael S. Yao and Chris Callison-Burch and James C. Gee and Mark Yatskar},
journal={arXiv preprint arXiv:2405.14839},
year={2024}
}
``` |
RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf | RichardErkhov | 2024-05-27T18:35:54Z | 459 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T16:23:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Medusa-1.1-L2-7B - GGUF
- Model creator: https://huggingface.co/Sao10K/
- Original model: https://huggingface.co/Sao10K/Medusa-1.1-L2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Medusa-1.1-L2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q2_K.gguf) | Q2_K | 2.36GB |
| [Medusa-1.1-L2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Medusa-1.1-L2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Medusa-1.1-L2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Medusa-1.1-L2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Medusa-1.1-L2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q3_K.gguf) | Q3_K | 3.07GB |
| [Medusa-1.1-L2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Medusa-1.1-L2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Medusa-1.1-L2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Medusa-1.1-L2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Medusa-1.1-L2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Medusa-1.1-L2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Medusa-1.1-L2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q4_K.gguf) | Q4_K | 3.8GB |
| [Medusa-1.1-L2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Medusa-1.1-L2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Medusa-1.1-L2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Medusa-1.1-L2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Medusa-1.1-L2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q5_K.gguf) | Q5_K | 4.45GB |
| [Medusa-1.1-L2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Medusa-1.1-L2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Medusa-1.1-L2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q6_K.gguf) | Q6_K | 5.15GB |
| [Medusa-1.1-L2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Medusa-1.1-L2-7B-gguf/blob/main/Medusa-1.1-L2-7B.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: llama2
language:
- en
---
An experimental TIES merge between 5 models and 2 LoRAs at varying weights and densities,
<br> followed by some additional training on a dataset.
This is purely for my personal testing. Use it if you want.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Medusa-1.1-L2-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.62 |
| ARC (25-shot) | 56.48 |
| HellaSwag (10-shot) | 78.57 |
| MMLU (5-shot) | 51.56 |
| TruthfulQA (0-shot) | 47.7 |
| Winogrande (5-shot) | 75.06 |
| GSM8K (5-shot) | 1.44 |
| DROP (3-shot) | 36.53 |
|
mradermacher/SOVLish-Devil-8B-L3-i1-GGUF | mradermacher | 2024-05-29T00:58:57Z | 459 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:saishf/SOVLish-Devil-8B-L3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T22:09:10Z | ---
base_model: saishf/SOVLish-Devil-8B-L3
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/saishf/SOVLish-Devil-8B-L3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
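As a rough sketch, multi-part quants are simply concatenated back into one file before loading (the part-file names below are hypothetical; use the actual filenames from the repository):
```shell
# Hypothetical part names; join them in order into a single GGUF file
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```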
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVLish-Devil-8B-L3-i1-GGUF/resolve/main/SOVLish-Devil-8B-L3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/airoboros-70b-3.3-GGUF | mradermacher | 2024-06-10T04:08:28Z | 459 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"en",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:jondurbin/airoboros-70b-3.3",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-06T17:50:32Z | ---
base_model: jondurbin/airoboros-70b-3.3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- jondurbin/gutenberg-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jondurbin/airoboros-70b-3.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/airoboros-70b-3.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-70b-3.3-GGUF/resolve/main/airoboros-70b-3.3.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/DonutHole-8x7B-i1-GGUF | mradermacher | 2024-06-12T23:18:46Z | 459 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ycros/DonutHole-8x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-12T14:55:48Z | ---
base_model: ycros/DonutHole-8x7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ycros/DonutHole-8x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DonutHole-8x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
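If you would rather fetch a single quant programmatically than through the browser, `huggingface_hub` can download one file from this repo; a sketch using the i1-Q4_K_M entry from the table below (any other file name from the table works the same way):
```python
# Sketch: downloading one quant file with huggingface_hub (pip install huggingface_hub).
# Repo id and file name are taken from the i1-Q4_K_M row of the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/DonutHole-8x7B-i1-GGUF",
    filename="DonutHole-8x7B.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded .gguf
```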
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/DonutHole-8x7B-i1-GGUF/resolve/main/DonutHole-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
shubh1410/si_distilBert_intent | shubh1410 | 2024-06-20T06:34:36Z | 459 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-17T11:09:56Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_intent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_intent
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
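Since the label set is not documented here, the sketch below only shows how the checkpoint can be loaded for inference with the `transformers` pipeline; the example sentence is a placeholder and the returned label names come from the model's own config.
```python
# Minimal inference sketch; the intent label names are whatever the
# fine-tuned config defines (they are not documented in this card).
from transformers import pipeline

classifier = pipeline("text-classification", model="shubh1410/si_distilBert_intent")
print(classifier("I want to book a cab to the airport"))
```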
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
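For reference, the hyperparameters above correspond roughly to a `TrainingArguments` configuration like the sketch below; how the original run actually wired them into a `Trainer` is not documented.
```python
# Rough reconstruction of the listed hyperparameters as TrainingArguments;
# the exact Trainer setup used for this model is not documented in the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert_intent",  # assumption: output directory name
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```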
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2702 | 1.0 | 690 | 0.0219 | 0.9968 |
| 0.0172 | 2.0 | 1380 | 0.0084 | 0.9989 |
| 0.0045 | 3.0 | 2070 | 0.0044 | 0.9989 |
| 0.0021 | 4.0 | 2760 | 0.0033 | 0.9989 |
| 0.0015 | 5.0 | 3450 | 0.0025 | 0.9996 |
| 0.0007 | 6.0 | 4140 | 0.0011 | 0.9996 |
| 0.0004 | 7.0 | 4830 | 0.0008 | 0.9996 |
| 0.0002 | 8.0 | 5520 | 0.0006 | 1.0 |
| 0.0002 | 9.0 | 6210 | 0.0006 | 0.9996 |
| 0.0001 | 10.0 | 6900 | 0.0005 | 1.0 |
| 0.0001 | 11.0 | 7590 | 0.0004 | 1.0 |
| 0.0001 | 12.0 | 8280 | 0.0004 | 0.9996 |
| 0.0 | 13.0 | 8970 | 0.0006 | 0.9996 |
| 0.0 | 14.0 | 9660 | 0.0003 | 1.0 |
| 0.0 | 15.0 | 10350 | 0.0002 | 1.0 |
| 0.0 | 16.0 | 11040 | 0.0003 | 0.9996 |
| 0.0 | 17.0 | 11730 | 0.0003 | 0.9996 |
| 0.0 | 18.0 | 12420 | 0.0003 | 1.0 |
| 0.0 | 19.0 | 13110 | 0.0002 | 1.0 |
| 0.0 | 20.0 | 13800 | 0.0002 | 1.0 |
| 0.0 | 21.0 | 14490 | 0.0003 | 1.0 |
| 0.0 | 22.0 | 15180 | 0.0003 | 0.9996 |
| 0.0 | 23.0 | 15870 | 0.0002 | 1.0 |
| 0.0 | 24.0 | 16560 | 0.0004 | 0.9996 |
| 0.0 | 25.0 | 17250 | 0.0002 | 1.0 |
| 0.0 | 26.0 | 17940 | 0.0002 | 1.0 |
| 0.0 | 27.0 | 18630 | 0.0003 | 0.9996 |
| 0.0 | 28.0 | 19320 | 0.0001 | 1.0 |
| 0.0 | 29.0 | 20010 | 0.0002 | 1.0 |
| 0.0 | 30.0 | 20700 | 0.0002 | 1.0 |
| 0.0 | 31.0 | 21390 | 0.0002 | 1.0 |
| 0.0 | 32.0 | 22080 | 0.0001 | 1.0 |
| 0.0 | 33.0 | 22770 | 0.0001 | 1.0 |
| 0.0 | 34.0 | 23460 | 0.0001 | 1.0 |
| 0.0 | 35.0 | 24150 | 0.0001 | 1.0 |
| 0.0 | 36.0 | 24840 | 0.0001 | 1.0 |
| 0.0 | 37.0 | 25530 | 0.0001 | 1.0 |
| 0.0 | 38.0 | 26220 | 0.0001 | 1.0 |
| 0.0 | 39.0 | 26910 | 0.0001 | 1.0 |
| 0.0 | 40.0 | 27600 | 0.0001 | 1.0 |
| 0.0 | 41.0 | 28290 | 0.0001 | 1.0 |
| 0.0 | 42.0 | 28980 | 0.0001 | 1.0 |
| 0.0 | 43.0 | 29670 | 0.0001 | 1.0 |
| 0.0 | 44.0 | 30360 | 0.0001 | 1.0 |
| 0.0 | 45.0 | 31050 | 0.0001 | 1.0 |
| 0.0 | 46.0 | 31740 | 0.0001 | 1.0 |
| 0.0 | 47.0 | 32430 | 0.0001 | 1.0 |
| 0.0 | 48.0 | 33120 | 0.0001 | 1.0 |
| 0.0 | 49.0 | 33810 | 0.0001 | 1.0 |
| 0.0 | 50.0 | 34500 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.19.2
- Tokenizers 0.19.1
|
xX-FANE-Xx/mistral-7b-eufaq-merged-Q8_0-GGUF | xX-FANE-Xx | 2024-06-25T13:13:08Z | 459 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:xX-FANE-Xx/mistral-7b-eufaq-merged",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-25T13:12:37Z | ---
base_model: xX-FANE-Xx/mistral-7b-eufaq-merged
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# xX-FANE-Xx/mistral-7b-eufaq-merged-Q8_0-GGUF
This model was converted to GGUF format from [`xX-FANE-Xx/mistral-7b-eufaq-merged`](https://huggingface.co/xX-FANE-Xx/mistral-7b-eufaq-merged) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/xX-FANE-Xx/mistral-7b-eufaq-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo xX-FANE-Xx/mistral-7b-eufaq-merged-Q8_0-GGUF --hf-file mistral-7b-eufaq-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo xX-FANE-Xx/mistral-7b-eufaq-merged-Q8_0-GGUF --hf-file mistral-7b-eufaq-merged-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo xX-FANE-Xx/mistral-7b-eufaq-merged-Q8_0-GGUF --hf-file mistral-7b-eufaq-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo xX-FANE-Xx/mistral-7b-eufaq-merged-Q8_0-GGUF --hf-file mistral-7b-eufaq-merged-q8_0.gguf -c 2048
```
|
antoste/Magiq-3-Q2_K-GGUF | antoste | 2024-06-25T18:01:02Z | 459 | 0 | null | [
"gguf",
"italian",
"french",
"nlp",
"text-generation",
"moe",
"mixture of experts",
"llama-cpp",
"gguf-my-repo",
"en",
"it",
"fr",
"base_model:mymaia/Magiq-3",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-25T18:00:31Z | ---
base_model: mymaia/Magiq-3
language:
- en
- it
- fr
license: apache-2.0
tags:
- italian
- french
- nlp
- text-generation
- moe
- mixture of experts
- llama-cpp
- gguf-my-repo
---
# antoste/Magiq-3-Q2_K-GGUF
This model was converted to GGUF format from [`mymaia/Magiq-3`](https://huggingface.co/mymaia/Magiq-3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mymaia/Magiq-3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo antoste/Magiq-3-Q2_K-GGUF --hf-file magiq-3-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo antoste/Magiq-3-Q2_K-GGUF --hf-file magiq-3-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo antoste/Magiq-3-Q2_K-GGUF --hf-file magiq-3-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo antoste/Magiq-3-Q2_K-GGUF --hf-file magiq-3-q2_k.gguf -c 2048
```
|
NikolayKozloff/RoLlama3-8b-Instruct-Q5_0-GGUF | NikolayKozloff | 2024-06-30T20:25:29Z | 459 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"ro",
"base_model:OpenLLM-Ro/RoLlama3-8b-Instruct",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-06-30T12:06:17Z | ---
base_model: OpenLLM-Ro/RoLlama3-8b-Instruct
language:
- ro
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/RoLlama3-8b-Instruct-Q5_0-GGUF
This model was converted to GGUF format from [`OpenLLM-Ro/RoLlama3-8b-Instruct`](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q5_0-GGUF --hf-file rollama3-8b-instruct-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q5_0-GGUF --hf-file rollama3-8b-instruct-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q5_0-GGUF --hf-file rollama3-8b-instruct-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q5_0-GGUF --hf-file rollama3-8b-instruct-q5_0.gguf -c 2048
``` |
pucpr/gpt2-bio-pt | pucpr | 2022-11-23T12:33:37Z | 458 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "O paciente recebeu "
- text: "A cardiologia provou que "
- text: "O paciente chegou no hospital "
- text: "Cientistas descobriram que "
- text: "O nível de atividade biológica "
- text: "O DNA e o RNA "
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/gpt2-bio-pt/main/img/logo-gpt2-bio-pt.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/gpt2-bio-pt/main/img/logo-gpt2-bio-pt.png" alt="Logo GPt2-Bio-Pt">
# GPT2-BioPT - a Language Model for Portuguese Biomedical text generation
## Introduction
GPT2-BioPT (Portuguese Biomedical GPT-2 small) is a language model for Portuguese based on the OpenAI GPT-2 model, trained from the [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese/) with biomedical literature.
We used **Transfer Learning and Fine-tuning techniques** with 110MB of training data, corresponding to 16,209,373 tokens and 729,654 sentences.
## GPT-2
*Note: information copied/pasted from [Model: gpt2 >> GPT-2](https://huggingface.co/gpt2#gpt-2)*
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at this [page](https://openai.com/blog/better-language-models/) (February 14, 2019).
Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
*Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)*
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## How to use GPT2-BioPT with HuggingFace
```python
from transformers import pipeline

chef = pipeline('text-generation', model="pucpr/gpt2-bio-pt", tokenizer="pucpr/gpt2-bio-pt")
# generation options such as max_length are passed at call time
result = chef('O paciente chegou no hospital', max_length=800)[0]['generated_text']
print(result)
```
Result:
*```O paciente chegou no hospital três meses após a operação, não houve complicações graves. Entre os grupos que apresentaram maior número de lesões, o exame da cavidade pélvica estava significantemente associado à ausência de complicações. Foi encontrada uma maior incidência de fraturas (...)```*
## Citation
```
@INPROCEEDINGS{9474713,
author={Schneider, Elisa Terumi Rubel and de Souza, João Vitor Andrioli and Gumiel, Yohan Bonescki and Moro, Claudia and Paraiso, Emerson Cabrera},
booktitle={2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS)},
title={A GPT-2 Language Model for Biomedical Texts in Portuguese},
year={2021},
volume={},
number={},
pages={474-479},
doi={10.1109/CBMS52027.2021.00056}}
```
## Questions?
Post a Github issue on the [GPT2-Bio-Pt repo](https://github.com/HAILab-PUCPR/gpt2-bio-pt/). |
efederici/mmarco-sentence-BERTino | efederici | 2022-06-14T08:36:11Z | 458 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"dataset:unicamp-dl/mmarco",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2022-06-01T18:20:17Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- unicamp-dl/mmarco
---
# mmarco-sentence-BERTino
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on [mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco/viewer/italian/train).
<p align="center">
<img src="https://media.tate.org.uk/art/images/work/L/L04/L04294_9.jpg" width="600"> </br>
Mohan Samant, Midnight Fishing Party, 1978
</p>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/mmarco-sentence-BERTino')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/mmarco-sentence-BERTino')
model = AutoModel.from_pretrained('efederici/mmarco-sentence-BERTino')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
``` |
timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384 | timm | 2024-02-10T23:38:07Z | 458 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:laion-2b",
"arxiv:2210.08402",
"arxiv:2201.03545",
"arxiv:2103.00020",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-31T22:29:09Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- laion-2b
---
# Model card for convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-12k followed by ImageNet-1k in `timm` by Ross Wightman.
Please see related OpenCLIP model cards for more details on pretrain:
* https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 200.1
- GMACs: 101.1
- Activations (M): 126.7
- Image size: 384 x 384
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 96, 96])
# torch.Size([1, 384, 48, 48])
# torch.Size([1, 768, 24, 24])
# torch.Size([1, 1536, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
sail-rvc/Eminem_e600_s5400 | sail-rvc | 2023-07-14T07:22:11Z | 458 | 1 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:21:50Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Eminem_e600_s5400
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:22:11
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
lorahub/flan_t5_large-duorc_SelfRC_movie_director | lorahub | 2023-07-24T10:26:21Z | 458 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:26:11Z | ---
library_name: peft
---
|
Justin-Choo/XXMix_9realisticSDXL | Justin-Choo | 2023-08-29T04:03:18Z | 458 | 6 | diffusers | [
"diffusers",
"safetensors",
"art",
"text-to-image",
"en",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-28T10:19:40Z | ---
language:
- en
library_name: diffusers
tags:
- art
pipeline_tag: text-to-image
---
GET MORE INFO AT https://civitai.com/models/124421/xxmix9realisticsdxl
Trigger word (putting this word in your prompt will get better results): xxmix_girl
porkorbeef/Llama-2-13b-0904 | porkorbeef | 2023-09-04T01:31:52Z | 458 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| feature-extraction | 2023-09-04T01:03:30Z | Entry not found |
immich-app/nllb-clip-large-siglip__v1 | immich-app | 2023-12-11T05:51:06Z | 458 | 0 | transformers | [
"transformers",
"onnx",
"immich",
"clip",
"multilingual",
"endpoints_compatible",
"region:us"
]
| null | 2023-12-11T03:39:28Z | ---
tags:
- immich
- clip
- multilingual
---
# Model Description
This repo contains ONNX exports for the corresponding ViT-based multilingual CLIP model by OpenCLIP. See the [OpenCLIP](https://github.com/mlfoundations/open_clip) repo for more info.
Visual and textual encoders are separated into separate models for the purpose of generating image and text embeddings.
This repo is specifically intended for use with [Immich](https://immich.app/), a self-hosted photo library.
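As a rough illustration of the split-encoder layout, each export can be opened independently with `onnxruntime`; the file names below are hypothetical placeholders (check the repo's file listing for the actual paths), and Immich itself handles all of this automatically.
```python
# Sketch: loading the separated visual/textual encoders with onnxruntime.
# The file names are hypothetical placeholders; see the repo file listing.
import onnxruntime as ort

visual = ort.InferenceSession("visual/model.onnx")
textual = ort.InferenceSession("textual/model.onnx")

# Each session reports the input names/shapes it expects:
print([(i.name, i.shape) for i in visual.get_inputs()])
print([(i.name, i.shape) for i in textual.get_inputs()])
```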
|
OpenLLMAI/Llama-2-13b-sft-model-ocra-500k | OpenLLMAI | 2024-01-05T05:39:46Z | 458 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-05T03:17:17Z | Entry not found |
frankjoshua/sdxlUnstableDiffusers_v11 | frankjoshua | 2024-01-17T00:08:03Z | 458 | 3 | diffusers | [
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-01-16T21:23:40Z | Entry not found |
ares1123/virtual-dress-try-on | ares1123 | 2024-01-18T14:54:08Z | 458 | 10 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-01-18T10:07:31Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### This VTO was trained by Hideki Okamura with Stable Diffusion, DreamBooth, feature extraction (EfficientNetB3 CNN model), and OpenPose for estimating person keypoints
### Stable diffusion and vector embedding concepts are used to build a virtual try-on system that provides a realistic and visually appealing try-on experience for users
## Hardware and software requirements: A100 GPU, high RAM, PyTorch, stable-diffusion-v1-5, Python 3.0, U-Net architecture, DreamBooth, OpenPose, EfficientNetB3 pre-trained CNN model
The VTO model is hosted on the Hugging Face Model Hub.
(https://huggingface.co/ares1123/virtual-dress-try-on)
This model leverages a combination of advanced deep learning techniques and architectures, including stable-diffusion, DreamBooth, feature extraction using the EfficientNetB3 CNN model, and OpenPose for estimating person keypoints. These techniques are harmoniously integrated to provide a realistic and visually appealing virtual try-on experience for users.
The VTO model is built on the principles of stable diffusion and vector embeddings, which are critical in creating a high-quality virtual try-on system. The model is trained using the DreamBooth model, which is a stable-diffusion model, and the feature extraction is performed using the EfficientNetB3 CNN model. OpenPose, a real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints, is used for estimating person keypoints.
The model requires specific hardware and software for optimal performance. The hardware requirements include a GPU A100 and high RAM. The software requirements include PyTorch, stable-diffusion-v1-5, Python 3.0, U-Net Architecture, Dreambooth, OpenPose, and the EfficientNetB3 pre-trained CNN model.
The VTO model is a testament to the potential of deep learning in the fashion retail industry. It showcases how advanced machine learning techniques can be used to enhance the online shopping experience, making it more interactive and personalized. This model serves as a valuable resource for researchers and practitioners in the field, providing a practical example of a high-quality virtual try-on system.
The model also provides a foundation for future research and development in the field of virtual try-on systems. It highlights the potential of deep learning techniques in addressing the challenges associated with virtual try-on systems, such as the accuracy of virtual representations and the scalability of the system. By leveraging advanced deep learning techniques, the VTO model paves the way for the development of more sophisticated and effective virtual try-on systems in the future.
Sample pictures of this concept:
















































































































































































































































































|
owner203/japanese-llama-2-7b-gguf | owner203 | 2024-06-05T02:30:01Z | 458 | 0 | null | [
"gguf",
"ja",
"license:llama2",
"region:us"
]
| null | 2024-01-22T03:00:02Z | ---
license: llama2
language:
- ja
---
# Japanese-LLaMA-2-7B-GGUF
Japanese-LLaMA-2-7B-GGUF is the **GGUF format** version of Japanese-LLaMA-2-7B.
Model URL: https://huggingface.co/owner203/japanese-llama-2-7b
|
rityakh/realitium-finetune | rityakh | 2024-02-08T04:56:21Z | 458 | 0 | diffusers | [
"diffusers",
"text-to-image",
"region:us"
]
| text-to-image | 2024-02-05T01:39:15Z | ---
library_name: diffusers
pipeline_tag: text-to-image
---
# Realitium finetune models
### Here you can find only pure, trained models, without mixing. |
Crataco/TinyDolphin-2.8.2-1.1B-laser-GGUF | Crataco | 2024-02-11T19:25:47Z | 458 | 0 | null | [
"gguf",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:teknium/openhermes",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-05T17:48:40Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- teknium/openhermes
language:
- en
---
llama.cpp used for quantization: [commit a6e514a](https://github.com/ggerganov/llama.cpp/tree/a6e514a85f0fda38ff78ec91782877ea3d19ed98)
Was testing this model out and found it pretty decent for a 1.1B model. Smaller models are still stupid but can work as a basic conversational partner on low-end hardware.
Was inspired after hearing about it on r/LocalLLaMA and finding out the only other quants of this model are Q4_K_M and Q8_0.
I also tried converting it to IQ2_XXS, IQ2_XS, and Q2_K_S, but none of them worked because I need an importance matrix.
Original model card below.
***
# TinyDolphin-2.8.2-1.1b-laser

Join Our Discord! https://discord.gg/cognitivecomputations
This is version 3 of a model trained on 3 3090's by Kearm on the new Dolphin 2.8 dataset by Eric Hartford https://erichartford.com/dolphin 🐬
This model uses our laser technique from https://github.com/cognitivecomputations/laserRMT to denoise the model!
For this version we increased the epochs as well as refined the datasets used.
## Example Outputs
TBD
Support my efforts! https://ko-fi.com/kearm
# Original Model Card Below
# TinyLlama-1.1B
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86|
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99| |
google/gemma-2b-it-sfp-cpp | google | 2024-06-27T14:09:43Z | 458 | 2 | gemma.cpp | [
"gemma.cpp",
"text-generation",
"license:gemma",
"region:us"
]
| text-generation | 2024-02-27T10:52:17Z | ---
library_name: gemma.cpp
license: gemma
pipeline_tag: text-generation
tags: []
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B instruct version of the Gemma model for usage with C++ (https://github.com/google/gemma.cpp). This is a compressed version of the weights, which will load, run, and download more quickly. For more information about the model, visit https://huggingface.co/google/gemma-2b-it.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2b-it-sfp-cpp)
**Authors**: Google |
MTSAIR/MultiVerse_70B_AWQ | MTSAIR | 2024-05-06T11:24:03Z | 458 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-04-03T15:19:17Z | ---
license: other
license_name: mts-ai
license_link: LICENSE
---
**Note:**
Our multiverse training method is not related to the multiverse paper; it is a new technique that we hope to publish soon.
Quantized int4 version of [MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B) with AWQ quantization method
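A minimal loading sketch, assuming a recent `transformers` with the `autoawq` and `accelerate` packages installed (AWQ checkpoints load through the regular `from_pretrained` path); the prompt and generation settings are placeholders.
```python
# Sketch: loading the AWQ-quantized checkpoint with transformers
# (requires autoawq alongside a recent transformers release).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MTSAIR/MultiVerse_70B_AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```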
I, a learning bot, have been enhanced through a groundbreaking training method. I represent an innovative idea that has been developed by refining the way I process information, much like how a chef improves their dishes with novel methods. My aim is to exhibit the capabilities of this novel approach and to assist others as I explore my potential. Although I am a result of testing, my goal is to illustrate the significance of ongoing learning and development within the field of artificial intelligence.
bartowski/CodeQwen1.5-7B-Chat-GGUF | bartowski | 2024-04-17T11:42:50Z | 458 | 7 | null | [
"gguf",
"chat",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-17T03:28:34Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## Llamacpp Quantizations of CodeQwen1.5-7B-Chat
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> PR <a href="https://github.com/ggerganov/llama.cpp/pull/6707">6707</a> for quantization.
Original model: https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CodeQwen1.5-7B-Chat-Q8_0.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q8_0.gguf) | Q8_0 | 7.70GB | Extremely high quality, generally unneeded but max available quant. |
| [CodeQwen1.5-7B-Chat-Q6_K.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q6_K.gguf) | Q6_K | 6.37GB | Very high quality, near perfect, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q5_K_M.gguf) | Q5_K_M | 5.42GB | High quality, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q5_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q5_K_S.gguf) | Q5_K_S | 5.14GB | High quality, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q4_K_M.gguf) | Q4_K_M | 4.73GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q4_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q4_K_S.gguf) | Q4_K_S | 4.41GB | Slightly lower quality with more space savings, *recommended*. |
| [CodeQwen1.5-7B-Chat-IQ4_NL.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ4_NL.gguf) | IQ4_NL | 4.18GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [CodeQwen1.5-7B-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ4_XS.gguf) | IQ4_XS | 4.03GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q3_K_L.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q3_K_L.gguf) | Q3_K_L | 3.98GB | Lower quality but usable, good for low RAM availability. |
| [CodeQwen1.5-7B-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q3_K_M.gguf) | Q3_K_M | 3.80GB | Even lower quality. |
| [CodeQwen1.5-7B-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_M.gguf) | IQ3_M | 3.60GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [CodeQwen1.5-7B-Chat-IQ3_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_S.gguf) | IQ3_S | 3.50GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [CodeQwen1.5-7B-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q3_K_S.gguf) | Q3_K_S | 3.50GB | Low quality, not recommended. |
| [CodeQwen1.5-7B-Chat-IQ3_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_XS.gguf) | IQ3_XS | 3.35GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [CodeQwen1.5-7B-Chat-IQ3_XXS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_XXS.gguf) | IQ3_XXS | 3.22GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [CodeQwen1.5-7B-Chat-Q2_K.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q2_K.gguf) | Q2_K | 3.05GB | Very low quality but surprisingly usable. |
| [CodeQwen1.5-7B-Chat-IQ2_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_M.gguf) | IQ2_M | 3.00GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [CodeQwen1.5-7B-Chat-IQ2_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_S.gguf) | IQ2_S | 2.87GB | Very low quality, uses SOTA techniques to be usable. |
| [CodeQwen1.5-7B-Chat-IQ2_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_XS.gguf) | IQ2_XS | 2.76GB | Very low quality, uses SOTA techniques to be usable. |
| [CodeQwen1.5-7B-Chat-IQ2_XXS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_XXS.gguf) | IQ2_XXS | 2.61GB | Lower quality, uses SOTA techniques to be usable. |
| [CodeQwen1.5-7B-Chat-IQ1_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ1_M.gguf) | IQ1_M | 2.45GB | Extremely low quality, *not* recommended. |
| [CodeQwen1.5-7B-Chat-IQ1_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ1_S.gguf) | IQ1_S | 2.36GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD GPUs, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
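As a toy illustration of that guidance (the sizes are taken from the table above; the 1.5 GB headroom figure is an assumption, not a hard rule):
```python
# Pick the largest quant that leaves some headroom below the available VRAM.
quant_sizes_gb = {
    "Q8_0": 7.70, "Q6_K": 6.37, "Q5_K_M": 5.42, "Q5_K_S": 5.14,
    "Q4_K_M": 4.73, "Q4_K_S": 4.41, "IQ4_XS": 4.03, "Q3_K_M": 3.80,
    "IQ3_M": 3.60, "Q2_K": 3.05,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in quant_sizes_gb.items() if size <= budget}
    # Fall back to the smallest quant if nothing fits within the budget.
    return max(fitting, key=fitting.get) if fitting else min(quant_sizes_gb, key=quant_sizes_gb.get)

print(pick_quant(8.0))   # 8 GB of VRAM -> Q6_K
print(pick_quant(6.0))   # 6 GB of VRAM -> Q4_K_S
```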
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
PrunaAI/Qwen1.5-MoE-A2.7B-Chat-GGUF-smashed | PrunaAI | 2024-05-04T15:54:20Z | 458 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-05-04T08:39:48Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the Qwen/Qwen1.5-MoE-A2.7B-Chat model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Qwen1.5-MoE-A2.7B-Chat-GGUF-smashed and below it, a specific filename to download, such as: Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Qwen1.5-MoE-A2.7B-Chat-GGUF-smashed Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Qwen1.5-MoE-A2.7B-Chat-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Qwen1.5-MoE-A2.7B-Chat-GGUF-smashed Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP | Fischerboot | 2024-05-08T15:13:12Z | 458 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:Fischerboot/Llama3-Aura-3some-SLERP",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-07T20:53:26Z | ---
base_model:
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- Fischerboot/Llama3-Aura-3some-SLERP
library_name: transformers
tags:
- mergekit
- merge
license: llama3
language:
- en
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
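For intuition, here is a minimal per-tensor SLERP sketch (an illustration only, not mergekit's actual implementation, which also handles the per-filter `t` schedules shown in the configuration below):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel -> plain linear interpolation
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * omega) * a_flat + torch.sin(t * omega) * b_flat) / torch.sin(omega)
    return merged.reshape(a.shape).to(a.dtype)

# Example: interpolate halfway between two parameter tensors of the same shape.
w_a, w_b = torch.randn(4096, 4096), torch.randn(4096, 4096)
w_merged = slerp(0.5, w_a, w_b)
```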
### Models Merged
The following models were included in the merge:
* [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored)
* [Fischerboot/Llama3-Aura-3some-SLERP](https://huggingface.co/Fischerboot/Llama3-Aura-3some-SLERP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Fischerboot/Llama3-Aura-3some-SLERP
layer_range:
- 0
- 32
- model: Orenguteng/Llama-3-8B-Lexi-Uncensored
layer_range:
- 0
- 32
merge_method: slerp
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
``` |
QuantFactory/dolphin-2.9.1-yi-1.5-9b-GGUF | QuantFactory | 2024-05-26T04:20:59Z | 458 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"text-generation",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9.1-yi-1.5-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-26T03:00:55Z | ---
license: apache-2.0
base_model: cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
library_name: transformers
pipeline_tag: text-generation
---
# Dolphin 2.9.1 Yi 1.5 9b 🐬-GGUF
This is quantized version of [cognitivecomputations/dolphin-2.9.1-yi-1.5-9b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-yi-1.5-9b) created using llama.cpp
# Model Description
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
This is our most spectacular outcome ever. FFT, all parameters, 16bit. 70.9 MMLU on 9b! And it talks like a dream.
Although the maximum positional embedding length is 4k, we used a rope theta of 1000000.0 and trained with a sequence length of 12k. We plan to train on the upcoming 32k version as well.
[](https://discord.gg/cognitivecomputations)
Our appreciation for the sponsors of Dolphin 2.9.1:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node
- [OnDemand](https://on-demand.io/) - provided inference sponsorship
This model is based on Yi-1.5-9b, and is governed by apache 2.0 license.
The base model has a 4k context, but we used a rope theta of 1000000.0 and performed the full-weight fine-tuning with a 12k sequence length.
Dolphin 2.9.1 uses ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
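A minimal sketch of using one of these GGUF quants with the ChatML format via `llama-cpp-python` (the local file name, context size, and generation settings below are assumptions, not values from this card):
```python
from llama_cpp import Llama

# chat_format="chatml" makes llama-cpp-python wrap the messages in the
# <|im_start|>/<|im_end|> template shown above.
llm = Llama(
    model_path="./dolphin-2.9.1-yi-1.5-9b.Q4_K_M.gguf",  # assumed local file name
    n_ctx=8192,
    chat_format="chatml",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Summarize what rope theta scaling does."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```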
Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to apache 2.0 license. We grant permission for any use, including commercial. Dolphin was trained on data generated from GPT4, among other models.
## Evals

## Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: 01-ai/Yi-1.5-9B
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
trust_remote_code: true
# load_in_8bit: false
# load_in_4bit: true
# strict: false
# adapter: qlora
# lora_modules_to_save: [embed_tokens, lm_head]
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: True
# lora_fan_in_fan_out:
datasets:
- path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
chat_template: chatml
dataset_prepared_path: yi34b
val_set_size: 0.03
output_dir: ./out-yi
sequence_len: 12000
sample_packing: true
pad_to_sequence_len: true
wandb_project: dolphin-2.9-yi-34b
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
# resume_from_checkpoint: /workspace/axolotl/dbrx-checkpoint
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 4
save_total_limit: 2
save_steps:
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|startoftext|>"
eos_token: "<|im_end|>"
pad_token: "<unk>"
unk_token: "<unk>"
tokens:
- "<|im_start|>"
```
</details><br>
# out-yi
This model is a fine-tuned version of [01-ai/Yi-1.5-9B](https://huggingface.co/01-ai/Yi-1.5-9B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6332 | 0.0024 | 1 | 0.6469 |
| 0.4811 | 0.2499 | 106 | 0.4739 |
| 0.4465 | 0.4997 | 212 | 0.4547 |
| 0.4472 | 0.7496 | 318 | 0.4480 |
| 0.4373 | 0.9994 | 424 | 0.4429 |
| 0.4147 | 1.2384 | 530 | 0.4432 |
| 0.3879 | 1.4882 | 636 | 0.4400 |
| 0.3872 | 1.7381 | 742 | 0.4371 |
| 0.4044 | 1.9879 | 848 | 0.4344 |
| 0.3509 | 2.2269 | 954 | 0.4410 |
| 0.3628 | 2.4767 | 1060 | 0.4401 |
| 0.3652 | 2.7266 | 1166 | 0.4397 |
| 0.3674 | 2.9764 | 1272 | 0.4396 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.15.0
- Tokenizers 0.19.1 |
MY555/tinyllama-Surya-v1 | MY555 | 2024-06-04T09:37:02Z | 458 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-30T13:38:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
briaai/DEV-ControlNetInpaintingFast | briaai | 2024-06-13T18:27:50Z | 458 | 0 | diffusers | [
"diffusers",
"text-to-image",
"legal liability",
"commercial use",
"license:other",
"region:us"
]
| text-to-image | 2024-06-13T12:06:28Z |
---
license_name: bria-2.3
license: other
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
library_name: diffusers
inference: false
tags:
- text-to-image
- legal liability
- commercial use
extra_gated_description: Model weights from BRIA AI can be obtained with the purchase of a commercial license. Fill in the form below and we reach out to you.
extra_gated_heading: "Fill in this form to request a commercial license for the model"
extra_gated_fields:
Name: text
Company/Org name: text
Org Type (Early/Growth Startup, Enterprise, Academy): text
Role: text
Country: text
Email: text
By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below: checkbox
---
# BRIA 2.3 ControlNet Inpainting Fast
Trained exclusively on the largest multi-source commercial-grade licensed dataset, BRIA 2.3 inpainting guarantees best quality while remaining safe for commercial use. The model provides full legal liability coverage for copyright and privacy infringement as well as harmful content mitigation, as our dataset does not include copyrighted materials such as fictional characters, logos or trademarks, public figures, harmful content or privacy-infringing content.
BRIA 2.3 is an inpainting model designed to fill masked regions in images based on user-provided textual prompts. The model can be applied in different scenarios, including object removal, replacement, addition, and modification within an image, while also possessing the capability to expand the image.
# What's New
BRIA 2.3 ControlNet Inpainting can be applied on top of BRIA 2.3 Text-to-Image and can therefore use [Fast-LORA](https://huggingface.co/briaai/BRIA-2.3-FAST-LORA). This results in an extremely fast inpainting model, requiring only 1.6s on an A10 GPU.
### Model Description
- **Developed by:** BRIA AI
- **Model type:** Latent diffusion image-to-image model
- **License:** [bria-2.3 inpainting Licensing terms & conditions](https://bria.ai/bria-huggingface-model-license-agreement/).
- Purchase is required to license and access the model.
- **Model Description:** BRIA 2.3 inpainting was trained exclusively on a professional-grade, licensed dataset. It is designed for commercial use and includes full legal liability coverage.
- **Resources for more information:** [BRIA AI](https://bria.ai/)
### Get Access to the source code and pre-trained model
Interested in BRIA 2.3 inpainting? Our Model is available for purchase.
**Purchasing access to BRIA 2.3 inpainting ensures royalty management and full liability for commercial use.**
*Are you a startup or a student?* We encourage you to apply for our specialized Academia and [Startup Programs](https://pages.bria.ai/the-visual-generative-ai-platform-for-builders-startups-plan?_gl=1*cqrl81*_ga*MTIxMDI2NzI5OC4xNjk5NTQ3MDAz*_ga_WRN60H46X4*MTcwOTM5OTMzNC4yNzguMC4xNzA5Mzk5MzM0LjYwLjAuMA..) to gain access. These programs are designed to support emerging businesses and academic pursuits with our cutting-edge technology.
**Contact us today to unlock the potential of BRIA 2.3 inpainting!**
By submitting the form above, you agree to BRIA’s [Privacy policy](https://bria.ai/privacy-policy/) and [Terms & conditions](https://bria.ai/terms-and-conditions/).
### How To Use
```python
from diffusers import (
AutoencoderKL,
LCMScheduler,
)
from pipeline_controlnet_sd_xl import StableDiffusionXLControlNetPipeline
from controlnet import ControlNetModel, ControlNetConditioningEmbedding
import torch
import numpy as np
from PIL import Image
import requests
import PIL
from io import BytesIO
from torchvision import transforms
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
def get_masked_image(image, image_mask, width, height):
image_mask = image_mask # inpaint area is white
image_mask = image_mask.resize((width, height)) # object to remove is white (1)
image_mask_pil = image_mask
image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
image_mask = np.array(image_mask_pil.convert("L")).astype(np.float32) / 255.0
assert image.shape[0:1] == image_mask.shape[0:1], "image and image_mask must have the same image size"
masked_image_to_present = image.copy()
masked_image_to_present[image_mask > 0.5] = (0.5,0.5,0.5) # set as masked pixel
image[image_mask > 0.5] = 0.5 # set as masked pixel - s.t. will be grey
image = Image.fromarray((image * 255.0).astype(np.uint8))
masked_image_to_present = Image.fromarray((masked_image_to_present * 255.0).astype(np.uint8))
return image, image_mask_pil, masked_image_to_present
image_transforms = transforms.Compose(
[
transforms.ToTensor(),
]
)
default_negative_prompt = "Logo,Watermark,Text,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((1024, 1024))
mask_image = download_image(mask_url).resize((1024, 1024))
mask_image = mask_image.convert("L")
width, height = init_image.size
# Load, init model
controlnet = ControlNetModel.from_config('briaai/DEV-ControlNetInpaintingFast', torch_dtype=torch.float16)
controlnet.controlnet_cond_embedding = ControlNetConditioningEmbedding(
    conditioning_embedding_channels=320,
    conditioning_channels=5
)
controlnet = ControlNetModel.from_pretrained("briaai/DEV-ControlNetInpaintingFast", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained("briaai/BRIA-2.3", controlnet=controlnet.to(dtype=torch.float16), torch_dtype=torch.float16, vae=vae) #force_zeros_for_empty_prompt=False, # vae=vae)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("briaai/BRIA-2.3-FAST-LORA")
pipe.fuse_lora()
pipe = pipe.to('cuda:0')
pipe.enable_xformers_memory_efficient_attention()
generator = torch.Generator(device='cuda:0').manual_seed(123456)
vae = pipe.vae
masked_image, image_mask, masked_image_to_present = get_masked_image(init_image, mask_image, width, height)
masked_image_tensor = image_transforms(masked_image)
masked_image_tensor = (masked_image_tensor - 0.5) / 0.5
masked_image_tensor = masked_image_tensor.unsqueeze(0).to(device="cuda")
control_latents = vae.encode(
masked_image_tensor[:, :3, :, :].to(vae.dtype)
).latent_dist.sample()
control_latents = control_latents * vae.config.scaling_factor
image_mask = np.array(image_mask)[:,:]
mask_tensor = torch.tensor(image_mask, dtype=torch.float32)[None, ...]
# binarize the mask
mask_tensor = torch.where(mask_tensor > 128.0, 255.0, 0)
mask_tensor = mask_tensor / 255.0
mask_tensor = mask_tensor.to(device="cuda")
mask_resized = torch.nn.functional.interpolate(mask_tensor[None, ...], size=(control_latents.shape[2], control_latents.shape[3]), mode='nearest')
# mask_resized = mask_resized.to(torch.float16)
masked_image = torch.cat([control_latents, mask_resized], dim=1)
prompt = "A park bench"
gen_img = pipe(negative_prompt=default_negative_prompt, prompt=prompt,
controlnet_conditioning_scale=1.0,
num_inference_steps=12,
height=height, width=width,
image = masked_image, # control image
init_image = init_image,
mask_image = mask_tensor,
guidance_scale = 1.2,
generator=generator).images[0]
```
|
Ammartatox/Phired-Q4_K_M-GGUF | Ammartatox | 2024-06-29T17:12:00Z | 458 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Ammartatox/Phired",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T17:11:49Z | ---
base_model: Ammartatox/Phired
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- llama-cpp
- gguf-my-repo
---
# Ammartatox/Phired-Q4_K_M-GGUF
This model was converted to GGUF format from [`Ammartatox/Phired`](https://huggingface.co/Ammartatox/Phired) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Ammartatox/Phired) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ammartatox/Phired-Q4_K_M-GGUF --hf-file phired-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ammartatox/Phired-Q4_K_M-GGUF --hf-file phired-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ammartatox/Phired-Q4_K_M-GGUF --hf-file phired-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ammartatox/Phired-Q4_K_M-GGUF --hf-file phired-q4_k_m.gguf -c 2048
```
|
oleg-go/mistral-7b-GGUF-Q4K | oleg-go | 2023-10-27T22:30:17Z | 457 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-27T21:30:22Z | Entry not found |
ishorn5/RTLCoder-v1.1 | ishorn5 | 2024-02-25T13:37:24Z | 457 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"arxiv:2312.08617",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-19T08:05:05Z | Paper: https://arxiv.org/pdf/2312.08617.pdf |
extraltodeus/llamachill_13b | extraltodeus | 2024-02-23T19:13:27Z | 457 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"fr",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-02-23T17:47:01Z | ---
license: llama2
language:
- en
- fr
---
This model is the selection of the values having the smallest sum of Euclidean distances between the following models (a rough sketch of the idea follows the list):
- TheBloke_WizardLM-1.0-Uncensored-Llama2-13B
- TheBloke_Spicyboros-13B-2.2-GPTQ
- TheBloke_MythoMax-L2-13B-GPTQ
- TheBloke_Llama-2-13B-GPTQ
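As one possible reading of that description, here is a rough per-element sketch (illustrative only; the author's exact procedure is not documented beyond the sentence above):
```python
import torch

def medoid_merge(tensors: list[torch.Tensor]) -> torch.Tensor:
    """For every weight position, keep the candidate value whose summed
    distance to the other candidates is smallest (the per-element medoid)."""
    stacked = torch.stack(tensors)                                          # [n_models, ...]
    # For candidate j: sum over i of |x_i - x_j| at each weight position.
    diffs = (stacked.unsqueeze(0) - stacked.unsqueeze(1)).abs().sum(dim=0)  # [n_models, ...]
    winner = diffs.argmin(dim=0, keepdim=True)                              # closest-to-all candidate
    return torch.gather(stacked, 0, winner).squeeze(0)

# Example with four same-shaped tensors standing in for one layer of each model.
candidates = [torch.randn(8, 8) for _ in range(4)]
merged = medoid_merge(candidates)
```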
Overall, it seems to be the most resilient to "bad settings" of the models I have tried so far.
It is also surprisingly capable in French. |
hendrydong/Mistral-RM-for-RAFT-GSHF-v0 | hendrydong | 2024-03-23T07:44:51Z | 457 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-03-22T12:23:36Z | # Training
The base model is `mistralai/Mistral-7B-Instruct-v0.2`.
The training script has also been merged into https://github.com/WeiXiongUST/RLHF-Reward-Modeling.
Thanks Wei (https://huggingface.co/weqweasdas) for his help and contribution to the community.
# Usage
To use this model, load it with `AutoModelForSequenceClassification`,
```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "hendrydong/Mistral-RM-for-RAFT-GSHF-v0", num_labels=1, torch_dtype=torch.bfloat16
)
```
and prepare the dataset like
```python
SAMPLE =[
{'role': 'user', 'content': 'Hi!'},
{'role': 'assistant', 'content': 'How are you?'},
]
```
The template is the same as `mistralai/Mistral-7B-Instruct-v0.2`.
The reward model can be used for iterative SFT/DPO.
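A minimal scoring sketch under the assumptions above (the tokenizer's chat template is assumed to produce the Mistral-7B-Instruct-v0.2 format mentioned in this card; the score is the single regression logit):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hendrydong/Mistral-RM-for-RAFT-GSHF-v0")
model = AutoModelForSequenceClassification.from_pretrained(
    "hendrydong/Mistral-RM-for-RAFT-GSHF-v0", num_labels=1, torch_dtype=torch.bfloat16
).eval()

sample = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "How are you?"},
]

# apply_chat_template formats and tokenizes the conversation in one step.
input_ids = tokenizer.apply_chat_template(sample, return_tensors="pt")
with torch.no_grad():
    reward = model(input_ids=input_ids).logits[0, 0].item()
print(reward)
```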
Please cite the following if you found this RM helpful:
```
@article{dong2023raft,
title={Raft: Reward ranked finetuning for generative foundation model alignment},
author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
journal={arXiv preprint arXiv:2304.06767},
year={2023}
}
@article{xiong2023gibbs,
title={Gibbs sampling from human feedback: A provable kl-constrained framework for rlhf},
author={Xiong, Wei and Dong, Hanze and Ye, Chenlu and Zhong, Han and Jiang, Nan and Zhang, Tong},
journal={arXiv preprint arXiv:2312.11456},
year={2023}
}
``` |
mradermacher/calm2-7b-chat-wataoshi-GGUF | mradermacher | 2024-05-06T06:00:39Z | 457 | 0 | transformers | [
"transformers",
"gguf",
"ja",
"base_model:aipracticecafe/calm2-7b-chat-wataoshi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-24T12:59:48Z | ---
base_model: aipracticecafe/calm2-7b-chat-wataoshi
language:
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/aipracticecafe/calm2-7b-chat-wataoshi
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
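For example, a single quant can be fetched programmatically with `huggingface_hub` (the file name below matches the Q4_K_M entry in the table that follows; adjust it to the quant you want):
```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/calm2-7b-chat-wataoshi-GGUF",
    filename="calm2-7b-chat-wataoshi.Q4_K_M.gguf",
)
print(path)
```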
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.IQ3_S.gguf) | IQ3_S | 3.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q3_K_S.gguf) | Q3_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.IQ4_NL.gguf) | IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q4_K_S.gguf) | Q4_K_S | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/calm2-7b-chat-wataoshi-GGUF/resolve/main/calm2-7b-chat-wataoshi.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Nyakura-CausalLM-RP-34B-GGUF | mradermacher | 2024-05-28T04:03:21Z | 457 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ParasiticRogue/Nyakura-CausalLM-RP-34B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-05T04:30:24Z | ---
base_model: ParasiticRogue/Nyakura-CausalLM-RP-34B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-34b
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ParasiticRogue/Nyakura-CausalLM-RP-34B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nyakura-CausalLM-RP-34B-GGUF/resolve/main/Nyakura-CausalLM-RP-34B.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
glonor/byt5-arabic-diacritization | glonor | 2024-04-21T15:53:23Z | 457 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-04-12T13:04:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DataPilot/ArrowPro-7B-KUJIRA | DataPilot | 2024-05-13T14:08:14Z | 457 | 59 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-09T07:16:24Z | ---
license: apache-2.0
---

## Overview
ArrowPro-7B-KUJIRA is based on the Mistral-family model NTQAI/chatntq-ja-7b-v1.0 and was built to serve as the core of AItubers and AI assistants, with an emphasis on chat performance and strong resistance to prompt injection.
## Benchmark
On the ELYZA-TASK100 benchmark, ArrowPro-7B-KUJIRA scored about 3.8 (LLaMa3-70B-based evaluation), achieving the best Japanese-language performance in the world among 7B models.

## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("DataPilot/ArrowPro-7B-KUJIRA")
model = AutoModelForCausalLM.from_pretrained(
"DataPilot/ArrowPro-7B-KUJIRA",
torch_dtype="auto",
)
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
def build_prompt(user_query):
sys_msg = "あなたは日本語を話す優秀なアシスタントです。回答には必ず日本語で答えてください。"
template = """[INST] <<SYS>>
{}
<</SYS>>
{}[/INST]"""
return template.format(sys_msg,user_query)
# Infer with prompt without any additional input
user_inputs = {
"user_query": "まどマギで一番かわいいキャラはだれ?",
}
prompt = build_prompt(**user_inputs)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=True,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=500,
temperature=1,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
## Acknowledgements
We thank everyone who offered advice.
We also express our gratitude to the developers of the original model.
## Request
When using this model, please take the utmost care not to cause trouble for others. |
selmamalak/organamnist-deit-base-finetuned | selmamalak | 2024-05-18T15:02:35Z | 457 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:medmnist-v2",
"base_model:facebook/deit-base-patch16-224",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-18T13:12:59Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/deit-base-patch16-224
datasets:
- medmnist-v2
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: organamnist-deit-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# organamnist-deit-base-finetuned
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the medmnist-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1907
- Accuracy: 0.9424
- Precision: 0.9464
- Recall: 0.9395
- F1: 0.9421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
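This card does not include the training script itself; as a rough, hypothetical sketch only, a LoRA fine-tune of the base model with PEFT is typically set up along the following lines (the label count, LoRA rank, and target modules are illustrative assumptions, not values taken from this card):
```python
# Hypothetical sketch of a PEFT/LoRA setup for the base model; num_labels, r,
# and target_modules are assumptions for illustration only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-base-patch16-224",
    num_labels=11,                      # assumed number of OrganAMNIST classes
    ignore_mismatched_sizes=True,       # replace the 1000-class ImageNet head
)
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],  # assumed attention projections to adapt
    modules_to_save=["classifier"],     # train the new classification head fully
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
The actual run would then pass such a model to a `Trainer` configured with the hyperparameters listed above.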
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5849 | 1.0 | 540 | 0.1842 | 0.9442 | 0.9449 | 0.9268 | 0.9285 |
| 0.6494 | 2.0 | 1081 | 0.1433 | 0.9499 | 0.9539 | 0.9510 | 0.9509 |
| 0.6059 | 3.0 | 1621 | 0.1171 | 0.9562 | 0.9659 | 0.9569 | 0.9593 |
| 0.3547 | 4.0 | 2162 | 0.0981 | 0.9666 | 0.9709 | 0.9712 | 0.9702 |
| 0.4852 | 5.0 | 2702 | 0.0539 | 0.9817 | 0.9848 | 0.9842 | 0.9842 |
| 0.406 | 6.0 | 3243 | 0.0818 | 0.9749 | 0.9793 | 0.9752 | 0.9768 |
| 0.3074 | 7.0 | 3783 | 0.1289 | 0.9666 | 0.9815 | 0.9778 | 0.9783 |
| 0.2679 | 8.0 | 4324 | 0.0311 | 0.9900 | 0.9916 | 0.9909 | 0.9912 |
| 0.2439 | 9.0 | 4864 | 0.0577 | 0.9851 | 0.9886 | 0.9880 | 0.9881 |
| 0.2169 | 9.99 | 5400 | 0.0720 | 0.9835 | 0.9888 | 0.9882 | 0.9882 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
QuantFactory/deepseek-coder-6.7B-kexer-GGUF | QuantFactory | 2024-05-27T06:26:21Z | 457 | 0 | transformers | [
"transformers",
"gguf",
"code",
"text-generation",
"dataset:JetBrains/KExercises",
"base_model:JetBrains/deepseek-coder-6.7B-kexer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T05:39:32Z | ---
license: apache-2.0
datasets:
- JetBrains/KExercises
base_model: JetBrains/deepseek-coder-6.7B-kexer
results:
- task:
type: text-generation
dataset:
name: MultiPL-HumanEval (Kotlin)
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 55.28
tags:
- code
library_name: transformers
pipeline_tag: text-generation
---
# Deepseek-Coder-6.7B-kexer-GGUF
This is a quantized version of [JetBrains/deepseek-coder-6.7B-kexer](https://huggingface.co/JetBrains/deepseek-coder-6.7B-kexer), created using llama.cpp.
# Kexer models
Kexer models are a collection of open-source generative text models fine-tuned on the [Kotlin Exercices](https://huggingface.co/datasets/JetBrains/KExercises) dataset.
This is a repository for the fine-tuned **Deepseek-coder-6.7b** model in the *Hugging Face Transformers* format.
# How to use
As with the base model, we can use FIM. To do this, the following format must be used:
```
'<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
```
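For illustration, here is a minimal sketch of FIM infilling with the Transformers-format checkpoint linked above ([JetBrains/deepseek-coder-6.7B-kexer](https://huggingface.co/JetBrains/deepseek-coder-6.7B-kexer)); the Kotlin snippet and generation settings are placeholders, not values from the original card:
```python
# Minimal FIM sketch with the Transformers-format checkpoint; the prompt content
# and generation settings are illustrative. device_map="auto" requires `accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JetBrains/deepseek-coder-6.7B-kexer")
model = AutoModelForCausalLM.from_pretrained("JetBrains/deepseek-coder-6.7B-kexer", device_map="auto")

prefix = "fun sum(numbers: List<Int>): Int {\n    "
suffix = "\n}"
prompt = "<|fim▁begin|>" + prefix + "<|fim▁hole|>" + suffix + "<|fim▁end|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, i.e. the infilled middle part.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```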
# Training setup
The model was trained on one A100 GPU with the following hyperparameters:
| **Hyperparameter** | **Value** |
|:---------------------------:|:----------------------------------------:|
| `warmup` | 10% |
| `max_lr` | 1e-4 |
| `scheduler` | linear |
| `total_batch_size` | 256 (~130K tokens per step) |
| `num_epochs` | 4 |
More details about fine-tuning can be found in the technical report (coming soon!).
# Fine-tuning data
For tuning this model, we used 15K examples from the synthetically generated [Kotlin Exercices](https://huggingface.co/datasets/JetBrains/KExercises) dataset. Every example follows the HumanEval format. In total, the dataset contains about 3.5M tokens.
# Evaluation
For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the [dataset's page](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval).
Here are the results of our evaluation:
| **Model name** | **Kotlin HumanEval Pass Rate** |
|:---------------------------:|:----------------------------------------:|
| `Deepseek-coder-6.7B` | 40.99 |
| `Deepseek-coder-6.7B-kexer` | **55.28** |
# Ethical considerations and limitations
Deepseek-coder-6.7B-kexer is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Deepseek-coder-6.7B-kexer's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of Deepseek-coder-6.7B-kexer, developers should perform safety testing and tuning tailored to their specific applications of the model. |
RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf | RichardErkhov | 2024-05-27T17:22:39Z | 457 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T15:15:22Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-chat-hf-flan2022-1.2M - GGUF
- Model creator: https://huggingface.co/synapsoft/
- Original model: https://huggingface.co/synapsoft/Llama-2-7b-chat-hf-flan2022-1.2M/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q2_K.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q4_0.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q4_K.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q4_1.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q5_0.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q5_K.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q5_1.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q6_K.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7b-chat-hf-flan2022-1.2M.Q8_0.gguf](https://huggingface.co/RichardErkhov/synapsoft_-_Llama-2-7b-chat-hf-flan2022-1.2M-gguf/blob/main/Llama-2-7b-chat-hf-flan2022-1.2M.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
datasets:
- conceptofmind/FLAN_2022
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the conceptofmind/FLAN_2022 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF | mradermacher | 2024-06-26T20:54:50Z | 457 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:OpenBuddy/openbuddy-yi1.5-34b-v21.1-131k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T16:54:50Z | ---
base_model: OpenBuddy/openbuddy-yi1.5-34b-v21.1-131k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-yi1.5-34b-v21.1-131k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
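As a minimal sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed), one of the quants below can be downloaded and loaded like this; the file name matches the i1-Q4_K_M entry in the table:
```python
# Download one quant from this repo and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF",
    filename="openbuddy-yi1.5-34b-v21.1-131k.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers when built with GPU support
out = llm("Hello, my name is", max_tokens=32)
print(out["choices"][0]["text"])
```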
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q4_0.gguf) | i1-Q4_0 | 19.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.1-131k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.1-131k.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/L3-RPExperiment-2x8B-GGUF | mradermacher | 2024-06-18T13:56:11Z | 457 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"not-for-all-audiences",
"en",
"base_model:Frowning/L3-RPExperiment-2x8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-16T23:11:12Z | ---
base_model: Frowning/L3-RPExperiment-2x8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Frowning/L3-RPExperiment-2x8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q2_K.gguf) | Q2_K | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.IQ3_XS.gguf) | IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q3_K_S.gguf) | Q3_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.IQ3_M.gguf) | IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q3_K_L.gguf) | Q3_K_L | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.IQ4_XS.gguf) | IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q5_K_S.gguf) | Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q5_K_M.gguf) | Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q6_K.gguf) | Q6_K | 11.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-RPExperiment-2x8B-GGUF/resolve/main/L3-RPExperiment-2x8B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nm-testing/tinyllama-oneshot-w8a16-per-channel | nm-testing | 2024-06-25T16:11:51Z | 457 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-25T16:11:13Z | Entry not found |
ai-tools-searchs/goofy | ai-tools-searchs | 2024-06-28T07:23:55Z | 457 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-28T02:27:05Z | Entry not found |
google/switch-c-2048 | google | 2024-01-11T03:52:57Z | 456 | 272 | transformers | [
"transformers",
"pytorch",
"switch_transformers",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2101.03961",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
]
| text2text-generation | 2022-11-04T10:32:46Z | ---
language:
- en
tags:
- text2text-generation
widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
example_title: "Masked Language Modeling"
datasets:
- c4
inference: false
license: apache-2.0
---
# Model Card for Switch Transformers C - 2048 experts (1.6T parameters for 3.1 TB)

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the Feed Forward layers replaced by Sparse MLP layers containing "experts" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract :
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
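As an illustration of the sparse "experts" layer described above (a minimal PyTorch sketch of the idea only, not the actual `SwitchTransformers` implementation in `transformers`), a feed-forward block with top-1 routing looks roughly like this:
```python
# Illustrative sketch: a feed-forward block where a router sends each token
# to its single top-1 expert, scaled by the routing probability.
import torch
import torch.nn as nn

class SwitchFFN(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        probs = self.router(x).softmax(dim=-1)       # routing probabilities per token
        gate, expert_idx = probs.max(dim=-1)         # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i                   # tokens routed to expert i
            if mask.any():
                out[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        return out

y = SwitchFFN()(torch.randn(2, 4, 512))
print(y.shape)  # torch.Size([2, 4, 512])
```
Each token is processed by a single expert chosen by the router, which is what keeps the compute per token roughly constant as the number of experts (and therefore the parameter count) grows.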
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers)
# Usage
Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task. Therefore, the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing)
Find below some example scripts on how to use the model in `transformers` - bear in mind that the model is **extremely** large, so you may consider using disk offload from `accelerate`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU using different precisions
#### BF16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", torch_dtype=torch.bfloat16, offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", load_in_8bit=True, offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
# Uses
## Direct Use and Downstream Use
See the [research paper](https://arxiv.org/pdf/2101.03961.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Ethical considerations and risks
More information needed.
## Known Limitations
More information needed.
## Sensitive Use:
More information needed.
# Training Details
## Training Data
The model was trained on a Masked Language Modeling task, on Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).
## Results
For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
doi = {10.48550/ARXIV.2101.03961},
url = {https://arxiv.org/abs/2101.03961},
author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
timm/tf_efficientnet_lite3.in1k | timm | 2023-04-27T21:38:29Z | 456 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1905.11946",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-12-13T00:13:52Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_lite3.in1k
An EfficientNet-Lite image classification model. Trained on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.2
- GMACs: 1.7
- Activations (M): 21.8
- Image size: 300 x 300
- **Papers:**
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_efficientnet_lite3.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_lite3.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 150, 150])
# torch.Size([1, 32, 75, 75])
# torch.Size([1, 48, 38, 38])
# torch.Size([1, 136, 19, 19])
# torch.Size([1, 384, 10, 10])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_efficientnet_lite3.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 10, 10) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{tan2019efficientnet,
title={Efficientnet: Rethinking model scaling for convolutional neural networks},
author={Tan, Mingxing and Le, Quoc},
booktitle={International conference on machine learning},
pages={6105--6114},
year={2019},
organization={PMLR}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
potsawee/t5-large-generation-race-Distractor | potsawee | 2023-03-12T16:51:23Z | 456 | 10 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:race",
"arxiv:2301.12307",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-02-23T01:01:34Z | ---
license: apache-2.0
datasets:
- race
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
inference: false
---
# t5-large fine-tuned to RACE for Generating Distractors
- Input: `question <sep> answer <sep> context`
- Output: list of 3 distractors
## Model Details
The t5-large model is fine-tuned on the RACE dataset, where the input is the concatenation of (question, answer, context) and the output is a list of 3 distractors. This is the second component in the question generation pipeline (i.e. `g2`) in our [MQAG paper](https://arxiv.org/abs/2301.12307),
or please refer to the GitHub repo of this project: https://github.com/potsawee/mqag0.
## How to Use the Model
Use the code below to get started with the model.
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("potsawee/t5-large-generation-race-Distractor")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("potsawee/t5-large-generation-race-Distractor")
>>> context = r"""
... World number one Novak Djokovic says he is hoping for a "positive decision" to allow him
... to play at Indian Wells and the Miami Open next month. The United States has extended
... its requirement for international visitors to be vaccinated against Covid-19. Proof of vaccination
... will be required to enter the country until at least 10 April, but the Serbian has previously
... said he is unvaccinated. The 35-year-old has applied for special permission to enter the country.
... Indian Wells and the Miami Open - two of the most prestigious tournaments on the tennis calendar
... outside the Grand Slams - start on 6 and 20 March respectively. Djokovic says he will return to
... the ATP tour in Dubai next week after claiming a record-extending 10th Australian Open title
... and a record-equalling 22nd Grand Slam men's title last month.""".replace("\n", "")
>>> question = "What is the best title for the passage?"
>>> answer = "Djokovic's application for special permission to enter the United States"
>>> input_text = " ".join([question, tokenizer.sep_token, answer, tokenizer.sep_token, context])
>>> inputs = tokenizer(input_text, return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=128)
>>> distractors = tokenizer.decode(outputs[0], skip_special_tokens=False)
>>> distractors = distractors.replace(tokenizer.pad_token, "").replace(tokenizer.eos_token, "")
>>> distractors = [y.strip() for y in distractors.split(tokenizer.sep_token)]
>>> print(distractors)
['The United States has extended its requirement for international visitors to be vaccinated against Covid-19',
"Djokovic's return to the ATP tour in Dubai",
"Djokovic's hope for a positive decision to allow him to play at Indian Wells and the Miami Open"]
```
## Citation
```bibtex
@article{manakul2023mqag,
title={MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization},
author={Manakul, Potsawee and Liusie, Adian and Gales, Mark JF},
journal={arXiv preprint arXiv:2301.12307},
year={2023}
}
``` |
LibrAI/longformer-action-ro | LibrAI | 2023-08-25T07:58:30Z | 456 | 0 | transformers | [
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-08-24T12:33:56Z | ---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: longformer-action-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-action-ro
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1084
- Accuracy: 0.964
- Precision: 0.961
- Recall: 0.936
- F1: 0.946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----:|
| No log | 1.0 | 89 | 0.2301 | 0.926 | 0.933 | 0.861 | 0.883 |
| No log | 2.0 | 178 | 0.1487 | 0.964 | 0.968 | 0.915 | 0.937 |
| No log | 3.0 | 267 | 0.1084 | 0.964 | 0.961 | 0.936 | 0.946 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF | TheBloke | 2023-09-27T12:53:08Z | 456 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"storywriting",
"base_model:Austism/chronos-wizardlm-uc-scot-st-13b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T01:29:07Z | ---
license: other
tags:
- llama
- storywriting
model_name: Chronos Wizardlm Uc Scot St 13B
base_model: Austism/chronos-wizardlm-uc-scot-st-13b
inference: false
model_creator: Austism
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos Wizardlm Uc Scot St 13B - GGUF
- Model creator: [Austism](https://huggingface.co/Austism)
- Original model: [Chronos Wizardlm Uc Scot St 13B](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Austism's Chronos Wizardlm Uc Scot St 13B](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF)
* [Austism's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
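As a rough sanity check of the Q4_K figure above (assuming the super-block also stores one fp16 scale and one fp16 min, as in llama.cpp's `block_q4_K` struct):
```python
# Back-of-the-envelope check of the 4.5 bpw figure for Q4_K.
weights = 8 * 32            # 256 weights per super-block
bits = weights * 4          # 4-bit quantized weights
bits += 8 * (6 + 6)         # 6-bit scale and 6-bit min per block
bits += 16 + 16             # fp16 super-block scale and min
print(bits / weights)       # 4.5 bits per weight
```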
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronos-wizardlm-uc-scot-st-13B.Q2_K.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronos-wizardlm-uc-scot-st-13B.Q3_K_S.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronos-wizardlm-uc-scot-st-13B.Q3_K_M.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronos-wizardlm-uc-scot-st-13B.Q3_K_L.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronos-wizardlm-uc-scot-st-13B.Q4_0.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronos-wizardlm-uc-scot-st-13B.Q4_K_S.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronos-wizardlm-uc-scot-st-13B.Q5_0.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronos-wizardlm-uc-scot-st-13B.Q5_K_S.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronos-wizardlm-uc-scot-st-13B.Q5_K_M.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronos-wizardlm-uc-scot-st-13B.Q6_K.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronos-wizardlm-uc-scot-st-13B.Q8_0.gguf](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF/blob/main/chronos-wizardlm-uc-scot-st-13B.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF and below it, a specific filename to download, such as: chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
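If you prefer to do the download from Python rather than the command line, the same `huggingface-hub` library exposes `hf_hub_download`. Below is a minimal sketch; the keyword arguments mirror the CLI flags above and the exact values are illustrative:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory and print its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF",
    filename="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```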
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF", model_file="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
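### How to load this model in Python code, using llama-cpp-python

First install the package with `pip install llama-cpp-python` (see the llama-cpp-python README for GPU-accelerated builds). The snippet below is a minimal sketch: the context size, GPU layer count, sampling settings and example instruction are illustrative assumptions, not tuned recommendations.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

# Alpaca-style prompt, matching the prompt template for this model.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI.\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```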
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Austism's Chronos Wizardlm Uc Scot St 13B
(chronos-13b+(WizardLM Uncensored+CoT+Storytelling))
80/20 merge
Intended to be much like Chronos, with different writing and instruction-following capabilities.
<!-- original-model-card end -->
|
TheBloke/airochronos-33B-GGUF | TheBloke | 2023-09-27T12:53:22Z | 456 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Henk717/airochronos-33B",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T01:53:50Z | ---
license: other
model_name: Airochronos 33B
base_model: Henk717/airochronos-33B
inference: false
model_creator: Henky!!
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airochronos 33B - GGUF
- Model creator: [Henky!!](https://huggingface.co/Henk717)
- Original model: [Airochronos 33B](https://huggingface.co/Henk717/airochronos-33B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Henk717's Airochronos 33B](https://huggingface.co/Henk717/airochronos-33B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airochronos-33B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airochronos-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GGUF)
* [Henky!!'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/airochronos-33B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airochronos-33b.Q2_K.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes |
| [airochronos-33b.Q3_K_S.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss |
| [airochronos-33b.Q3_K_M.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss |
| [airochronos-33b.Q3_K_L.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss |
| [airochronos-33b.Q4_0.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airochronos-33b.Q4_K_S.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss |
| [airochronos-33b.Q4_K_M.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended |
| [airochronos-33b.Q5_0.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airochronos-33b.Q5_K_S.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended |
| [airochronos-33b.Q5_K_M.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended |
| [airochronos-33b.Q6_K.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss |
| [airochronos-33b.Q8_0.gguf](https://huggingface.co/TheBloke/airochronos-33B-GGUF/blob/main/airochronos-33b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airochronos-33B-GGUF and below it, a specific filename to download, such as: airochronos-33b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airochronos-33B-GGUF airochronos-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airochronos-33B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airochronos-33B-GGUF airochronos-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
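If you prefer to do the download from Python rather than the command line, the same `huggingface-hub` library exposes `hf_hub_download`. Below is a minimal sketch; the keyword arguments mirror the CLI flags above and the exact values are illustrative:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory and print its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/airochronos-33B-GGUF",
    filename="airochronos-33b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```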
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airochronos-33b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airochronos-33B-GGUF", model_file="airochronos-33b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
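### How to load this model in Python code, using llama-cpp-python

First install the package with `pip install llama-cpp-python` (see the llama-cpp-python README for GPU-accelerated builds). The snippet below is a minimal sketch: the context size, GPU layer count, sampling settings and example instruction are illustrative assumptions, not tuned recommendations.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./airochronos-33b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

# Alpaca-style prompt, matching the prompt template for this model.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI.\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```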
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Henk717's Airochronos 33B
After the initial experiment with chronoboros-33B, it was evident that the merge was too unpredictable to be useful. Testing the individual models made it clear that the bias should be weighted towards Chronos.
This is the new release of the merge with 75% chronos 33B, and 25% airoboros-1.4 33B.
The model has been tested with the Alpaca prompting format, combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing.
It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
<!-- original-model-card end -->
|
TheBloke/MAmmoTH-Coder-13B-GGUF | TheBloke | 2023-09-27T12:54:28Z | 456 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:TIGER-Lab/MathInstruct",
"arxiv:2309.05653",
"base_model:TIGER-Lab/MAmmoTH-Coder-13B",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-09-24T00:25:53Z | ---
language:
- en
license: mit
datasets:
- TIGER-Lab/MathInstruct
model_name: MAmmoTH Coder 13B
base_model: TIGER-Lab/MAmmoTH-Coder-13B
inference: false
model_creator: TIGER-Lab
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MAmmoTH Coder 13B - GGUF
- Model creator: [TIGER-Lab](https://huggingface.co/TIGER-Lab)
- Original model: [MAmmoTH Coder 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [TIGER-Lab's MAmmoTH Coder 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF)
* [TIGER-Lab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [TIGER-Lab's MAmmoTH Coder 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mammoth-coder-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [mammoth-coder-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [mammoth-coder-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [mammoth-coder-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [mammoth-coder-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mammoth-coder-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [mammoth-coder-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [mammoth-coder-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mammoth-coder-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [mammoth-coder-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [mammoth-coder-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [mammoth-coder-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-13B-GGUF/blob/main/mammoth-coder-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MAmmoTH-Coder-13B-GGUF and below it, a specific filename to download, such as: mammoth-coder-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MAmmoTH-Coder-13B-GGUF mammoth-coder-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MAmmoTH-Coder-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MAmmoTH-Coder-13B-GGUF mammoth-coder-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
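If you prefer to do the download from Python rather than the command line, the same `huggingface-hub` library exposes `hf_hub_download`. Below is a minimal sketch; the keyword arguments mirror the CLI flags above and the exact values are illustrative:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory and print its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/MAmmoTH-Coder-13B-GGUF",
    filename="mammoth-coder-13b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```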
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mammoth-coder-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MAmmoTH-Coder-13B-GGUF", model_file="mammoth-coder-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
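### How to load this model in Python code, using llama-cpp-python

First install the package with `pip install llama-cpp-python` (see the llama-cpp-python README for GPU-accelerated builds). The snippet below is a minimal sketch: the context size, GPU layer count, sampling settings and example instruction are illustrative assumptions, not tuned recommendations.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./mammoth-coder-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

# Alpaca-style prompt, matching the prompt template for this model.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is the sum of the first 100 positive integers?\n\n"
    "### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```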
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: TIGER-Lab's MAmmoTH Coder 13B
# 🦣 MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH/](https://tiger-ai-lab.github.io/MAmmoTH/)
Paper: [https://arxiv.org/pdf/2309.05653.pdf](https://arxiv.org/pdf/2309.05653.pdf)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
## Introduction
We introduce 🦣 MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
| | **Base Model: Llama-2** | **Base Model: Code Llama** |
|-----|---------------------------------------------------------------|--------------------------------------------------------------------------|
| 7B | 🦣 [MAmmoTH-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 🦣 [MAmmoTH-Coder-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) |
| 13B | 🦣 [MAmmoTH-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 🦣 [MAmmoTH-Coder-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B)|
| 34B | - | 🦣 [MAmmoTH-Coder-34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B)|
| 70B | 🦣 [MAmmoTH-70B](https://huggingface.co/TIGER-Lab/MAmmoTH-70B) | - |
## Training Data
The models are trained on the 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), which is compiled from 13 different math rationale datasets. Check out the dataset card for more details.
## Training Procedure
The models are fine-tuned with the MathInstruct dataset using the original Llama-2 and Code Llama models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| Model | Size | Base | GSM8K | MATH | AQuA | NumGLUE | IID Avg | SVAMP | Mathematics | SimulEq | SAT-Math | MMLU-Math | OOD Avg |
|-------------------|-------|---------------|-----------|-------|-------|-----------|---------------|-----------|---------------|-----------|-----------|---------------|---------------|
| MAmmoTH | 7B | Llama-2 | 51.7 | 31.2 | 42.9 | 53.1 | 44.7 | 66.7 | 44.8 | 42 | 36.4 | 38.6 | 45.7 |
| MAmmoTH-Coder | 7B | Code-Llama | 58.8 | 35.2 | 43 | 57.1 | 48.5 | 71.1 | 53.9 | 44.6 | 40 | 40.5 | 50.2 |
| MAmmoTH | 13B | Llama-2 | 61.7 | 36 | 44.8 | 59.6 | 50.5 | 72.4 | 48.7 | 40.5 | 42.7 | 45.3 | 49.9 |
| MAmmoTH-Coder | 13B | Code-Llama | 64.3 | 38.6 | 46.1 | 54.2 | 50.8 | 73.2 | 60 | 44.1 | 40.9 | 45.2 | 52.6 |
| MAmmoTH-Coder | 34B | Code-Llama | 72.3 | 46.8 | 50.8 | 59.6 | 57.3 | 84 | 64.7 | 50.6 | 51.8 | 50.2 | 60.3 |
| MAmmoTH | 70B | Llama-2 | 76.7 | 44.2 | 61.4 | 64.3 | 61.7 | 81.7 | 55.3 | 45.3 | 58.6 | 52.3 | 58.6 |
## Usage
You can use the models through Hugging Face's Transformers library. Use the `pipeline` function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
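As a concrete sketch of that pipeline usage (the question, `max_new_tokens` value and `device_map` setting are illustrative assumptions; the prompt follows the CoT format shown below):

```python
from transformers import pipeline

# Build a text-generation pipeline; device_map="auto" needs the accelerate package
# and enough GPU memory (or offloading) for a 13B model.
pipe = pipeline("text-generation", model="TIGER-Lab/MAmmoTH-Coder-13B", device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is the sum of the first 100 positive integers?\n\n"
    "### Response:"
)
print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```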
## Prompt Format
If you want to do CoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
If you want to do PoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction} Let's write a program.
### Response:
```
## Intended Uses
These models are trained for research purposes. They are designed to solve general math problems. They can be used in educational software, tutoring systems, or any application where a solution to a math problem is needed. The models can generate both a chain of thought (CoT) rationale and a program of thought (PoT) rationale, providing a comprehensive solution to a given math problem.
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Even so, not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
  author={Xiang Yue and Xingwei Qu and Ge Zhang and Yao Fu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
```
<!-- original-model-card end -->
|
TheBloke/airoboros-m-7B-3.0-GGUF | TheBloke | 2023-10-05T23:27:14Z | 456 | 4 | transformers | [
"transformers",
"gguf",
"mistral",
"dataset:jondurbin/airoboros-3.0",
"base_model:jondurbin/airoboros-m-7b-3.0",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-05T22:47:54Z | ---
base_model: jondurbin/airoboros-m-7b-3.0
datasets:
- jondurbin/airoboros-3.0
inference: false
license: apache-2.0
model_creator: Jon Durbin
model_name: Airoboros M 7B 3.0
model_type: mistral
prompt_template: '[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros M 7B 3.0 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros M 7B 3.0](https://huggingface.co/jondurbin/airoboros-m-7b-3.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros M 7B 3.0](https://huggingface.co/jondurbin/airoboros-m-7b-3.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-m-7b-3.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Uncensored
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-m-7b-3.0.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-m-7b-3.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [airoboros-m-7b-3.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [airoboros-m-7b-3.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [airoboros-m-7b-3.0.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-m-7b-3.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [airoboros-m-7b-3.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [airoboros-m-7b-3.0.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-m-7b-3.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [airoboros-m-7b-3.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [airoboros-m-7b-3.0.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [airoboros-m-7b-3.0.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airoboros-m-7B-3.0-GGUF and below it, a specific filename to download, such as: airoboros-m-7b-3.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airoboros-m-7B-3.0-GGUF airoboros-m-7b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airoboros-m-7B-3.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-m-7B-3.0-GGUF airoboros-m-7b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-m-7b-3.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-m-7B-3.0-GGUF", model_file="airoboros-m-7b-3.0.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
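For a rough idea of what that looks like in practice, here is a minimal, untested sketch using the `LlamaCpp` wrapper from `langchain-community` (class and parameter names are assumed from that library's documentation; adjust the model path and layer count for your setup):

```python
from langchain_community.llms import LlamaCpp

# Assumes the GGUF file has already been downloaded as shown above.
llm = LlamaCpp(
    model_path="./airoboros-m-7b-3.0.Q4_K_M.gguf",
    n_ctx=2048,        # context length
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\nWhat is a llama? [/INST]"
print(llm.invoke(prompt))
```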
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros M 7B 3.0
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.0 models have been updated to use the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
do_strip = False
for user_input, response in chat_history:
user_input = user_input.strip() if do_strip else user_input
do_strip = True
texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
message = message.strip() if do_strip else message
texts.append(f'{message} [/INST] ')
return ''.join(texts)
def get_prompt(item):
"""
`item` is expected to be in chatml format, e.g.
{
"id": 1,
"conversations": [
{
"role": "system",
"content": "You are a helpful, unbiased, uncensored assistant."
},
{
"role": "user",
"content": "Hi!"
},
{
"role": "assistant",
"content": "Hello, how are you today?"
},
{
"role": "user",
"content": "Just peachy, and you?"
}
]
}
"""
    system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
    offset = 0
    if item['conversations'][0]['role'] == 'system':
        system_prompt = item['conversations'][0]['content']
        offset = 1
    chat_history = [
        (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content'])
for idx in range(offset, len(item['conversations']) - 1, 2)
]
message = item['conversations'][-1]['content']
return get_chat_prompt(message, chat_history, system_prompt)
```
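For example, calling `get_prompt` on the sample conversation from the docstring (a usage illustration, not part of the original script) yields a complete llama-2 chat prompt:

```python
item = {
    "id": 1,
    "conversations": [
        {"role": "system", "content": "You are a helpful, unbiased, uncensored assistant."},
        {"role": "user", "content": "Hi!"},
        {"role": "assistant", "content": "Hello, how are you today?"},
        {"role": "user", "content": "Just peachy, and you?"},
    ],
}
print(get_prompt(item))
# [INST] <<SYS>>
# You are a helpful, unbiased, uncensored assistant.
# <</SYS>>
#
# Hi! [/INST] Hello, how are you today? </s><s>[INST] Just peachy, and you? [/INST]
```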
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
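As a minimal sketch (not taken from the airoboros repo), the block can be pulled out of the response with a regular expression and validated with `json.loads` before handing it to an evaluator:

```python
import json
import re

model_output = """Solution as MathJSON:
<mathjson>
["Multiply", "Pi", ["Power", 17.2456, 2]]
</mathjson>"""

def extract_mathjson(response: str):
    """Return the parsed MathJSON expression, or None if no block is found."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.DOTALL)
    if not match:
        return None
    return json.loads(match.group(1))  # json.loads doubles as validation

expression = extract_mathjson(model_output)
print(expression)  # ['Multiply', 'Pi', ['Power', 17.2456, 2]]
# Pass `expression` to compute-engine (JS) or to the evaluate() function in
# mathjson.py (linked above) to obtain the numeric result.
```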
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
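If you are generating these prompts programmatically, a small hypothetical helper (not part of airoboros) that assembles the closed-context format from metadata/text pairs might look like this:

```python
def build_closed_context_prompt(blocks, instruction):
    """blocks: list of (metadata dict, text) pairs; instruction: the question(s) to ask."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
print(prompt)  # reproduces the example above
```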
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
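A short, hypothetical sketch of dispatching such a response to local Python functions (the `file_analytics` implementation below is invented purely for illustration):

```python
import json

def file_analytics(action, filters, text="Python is great. I use Python every day."):
    # Invented stand-in for the tool described in the prompt above.
    if action == "count_occurrences":
        return text.count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

AVAILABLE_FUNCTIONS = {"file_analytics": file_analytics}

model_response = """{
  "function": "file_analytics",
  "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}
}"""

call = json.loads(model_response)  # use yaml.safe_load() instead when the model emits YAML
result = AVAILABLE_FUNCTIONS[call["function"]](**call["params"])
print(result)  # 2
```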
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string and return the text content
    ...
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, return the output
    ...
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
AgentPublic/albertlight-7b | AgentPublic | 2024-04-25T09:14:37Z | 456 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-18T16:05:50Z | ---
license: apache-2.0
pipeline_tag: text-generation
language:
- fr
---
|
HuggingFaceM4/tiny-random-siglip | HuggingFaceM4 | 2024-01-31T20:02:37Z | 456 | 1 | transformers | [
"transformers",
"safetensors",
"siglip",
"zero-shot-image-classification",
"custom_code",
"endpoints_compatible",
"region:us"
]
| zero-shot-image-classification | 2023-11-21T14:57:06Z | Tiny random Siglip model. For testing purposes only.
Script used to create this tiny random model:
```python
from transformers import AutoConfig, AutoModel
config = AutoConfig.from_pretrained("HuggingFaceM4/siglip-so400m-14-384", trust_remote_code=True)
config._name_or_path = 'HuggingFaceM4/tiny-random-siglip'
config.text_config.hidden_size = int(config.text_config.hidden_size/8)
config.text_config.intermediate_size = int(config.text_config.intermediate_size/8)
config.text_config.num_attention_heads = int(config.text_config.num_attention_heads/8)
config.text_config.num_hidden_layers = 3
config.text_config.projection_dim = int(config.text_config.projection_dim/8)
config.vision_config.hidden_size = int(config.vision_config.hidden_size/8)
config.vision_config.image_size = 30
config.vision_config.intermediate_size = int(config.vision_config.intermediate_size/8)
config.vision_config.num_attention_heads = int(config.vision_config.num_attention_heads/8)
config.vision_config.num_hidden_layers = 3
config.vision_config.patch_size = 2
config.vision_config.projection_dim = int(config.vision_config.projection_dim/8)
config.auto_map = {
"AutoConfig": "HuggingFaceM4/tiny-random-siglip--configuration_siglip.SiglipConfig",
"AutoModel": "HuggingFaceM4/tiny-random-siglip--modeling_siglip.SiglipModel"
}
config.save_pretrained("./tiny-random-siglip")
model = AutoModel.from_pretrained("HuggingFaceM4/siglip-so400m-14-384", trust_remote_code=True)
SiglipModel = model.__class__
new_model = SiglipModel(config)
new_model.save_pretrained("./tiny-random-siglip")
``` |
TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF | TheBloke | 2023-12-24T14:41:34Z | 456 | 12 | transformers | [
"transformers",
"gguf",
"solar",
"base_model:Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2023-12-24T14:17:56Z | ---
base_model: Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
inference: false
license: apache-2.0
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: SauerkrautLM Una SOLAR Instruct
model_type: solar
prompt_template: '### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM Una SOLAR Instruct - GGUF
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [SauerkrautLM Una SOLAR Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yağız Çalık's SauerkrautLM Una SOLAR Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines
```
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
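As a quick sanity check of those bits-per-weight figures, the following sketch recomputes them from the super-block layouts described above (assuming one fp16 super-block scale, plus an fp16 super-block min for the "type-1" variants):

```python
def bits_per_weight(weight_bits, n_blocks, block_size, scale_bits, has_mins):
    weights = n_blocks * block_size                              # weights per super-block
    data = weight_bits * weights                                 # the quantized weights themselves
    block_meta = n_blocks * scale_bits * (2 if has_mins else 1)  # per-block scales (and mins)
    super_meta = 16 * (2 if has_mins else 1)                     # fp16 super-block scale (and min)
    return (data + block_meta + super_meta) / weights

print(bits_per_weight(3, 16, 16, 6, has_mins=False))  # GGML_TYPE_Q3_K -> 3.4375
print(bits_per_weight(4, 8, 32, 6, has_mins=True))    # GGML_TYPE_Q4_K -> 4.5
print(bits_per_weight(5, 8, 32, 6, has_mins=True))    # GGML_TYPE_Q5_K -> 5.5
print(bits_per_weight(6, 16, 16, 8, has_mins=False))  # GGML_TYPE_Q6_K -> 6.5625
```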
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sauerkrautlm-una-solar-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes |
| [sauerkrautlm-una-solar-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 4.67 GB| 7.17 GB | very small, high quality loss |
| [sauerkrautlm-una-solar-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss |
| [sauerkrautlm-una-solar-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss |
| [sauerkrautlm-una-solar-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sauerkrautlm-una-solar-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss |
| [sauerkrautlm-una-solar-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended |
| [sauerkrautlm-una-solar-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sauerkrautlm-una-solar-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended |
| [sauerkrautlm-una-solar-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended |
| [sauerkrautlm-una-solar-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss |
| [sauerkrautlm-una-solar-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/blob/main/sauerkrautlm-una-solar-instruct.Q8_0.gguf) | Q8_0 | 8 | 11.41 GB| 13.91 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF and below it, a specific filename to download, such as: sauerkrautlm-una-solar-instruct.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF sauerkrautlm-una-solar-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF sauerkrautlm-una-solar-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sauerkrautlm-una-solar-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./sauerkrautlm-una-solar-instruct.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### User:\n{prompt}\n\n### Assistant:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sauerkrautlm-una-solar-instruct.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yağız Çalık's SauerkrautLM Una SOLAR Instruct

# SauerkrautLM-UNA-SOLAR-Instruct
This is the model for SauerkrautLM-UNA-SOLAR-Instruct. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Yaml Config to reproduce
```yaml
slices:
- sources:
- model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct
layer_range: [0, 48]
- model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
<!-- original-model-card end -->
|
0xnu/AGTD-v0.1 | 0xnu | 2024-01-13T14:49:59Z | 456 | 2 | transformers | [
"transformers",
"tf",
"safetensors",
"bert",
"text-classification",
"nlp",
"code",
"en",
"dataset:dmitva/human_ai_generated_text",
"arxiv:2311.15565",
"doi:10.57967/hf/1603",
"license:mit",
"autotrain_compatible",
"region:us"
]
| text-classification | 2024-01-01T01:48:13Z | ---
license: mit
datasets:
- dmitva/human_ai_generated_text
language:
- en
widget:
- text: "This model trains on a diverse dataset and serves functions in applications requiring a mechanism for distinguishing between human and AI-generated text."
tags:
- nlp
- code
inference: false
---
# 0xnu/AGTD-v0.1
The **0xnu/AGTD-v0.1** model represents a significant breakthrough in distinguishing between text written by humans and one generated by Artificial Intelligence (AI). It is rooted in sophisticated algorithms and offers exceptional accuracy and efficiency in text analysis and classification. I detailed the findings in a study, and it is accessible [here](https://finns.cc/oxdlai).
## Instruction Format
```
<BOS> [CLS] [INST] Instruction [/INST] Model answer [SEP] [INST] Follow-up instruction [/INST] [SEP] [EOS]
```
Pseudo-code for tokenizing instructions with the new format:
```Python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[CLS]") + tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + tokenize("[SEP]") +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + tokenize("[SEP]") + [EOS_ID]
```
Notes:
- `[CLS]`, `[SEP]`, `[PAD]`, `[UNK]`, and `[MASK]` tokens are integrated based on their definitions in the tokenizer configuration.
- `[INST]` and `[/INST]` are utilized to encapsulate instructions.
- The tokenize method should not automatically add BOS or EOS tokens but should add a prefix space.
- The `do_lower_case` parameter indicates that text should be in lowercase for consistent tokenization.
- `clean_up_tokenization_spaces` removes unnecessary spaces in the tokenization process.
- The `tokenize_chinese_chars` parameter indicates special handling for Chinese characters.
- The maximum model length is set to 512 tokens.
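A small sketch for verifying these settings once the tokenizer is loaded (the attribute names come from the standard `transformers` tokenizer API and are assumed to be exposed by this checkpoint):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("0xnu/AGTD-v0.1")

# Special tokens referenced in the notes above
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token,
      tokenizer.unk_token, tokenizer.mask_token)
print(tokenizer.model_max_length)                 # expected: 512
print(getattr(tokenizer, "do_lower_case", None))  # expected: True (lowercasing enabled)
```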
## Installing Libraries
```sh
pip install torch transformers
```
## Run the model
```Python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "0xnu/AGTD-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# Input text
text = "This model trains on a diverse dataset and serves functions in applications requiring a mechanism for distinguishing between human and AI-generated text."
# Preprocess the text
inputs = tokenizer(text, return_tensors='pt')
# Run the model
outputs = model(**inputs)
# Interpret the output
logits = outputs.logits
# Apply softmax to convert logits to probabilities
probabilities = torch.softmax(logits, dim=1)
# Assuming the first class is 'human' and the second class is 'ai'
human_prob, ai_prob = probabilities.detach().numpy()[0]
# Print probabilities
print(f"Human Probability: {human_prob:.4f}")
print(f"AI Probability: {ai_prob:.4f}")
# Determine if the text is human or AI-generated
if human_prob > ai_prob:
print("The text is likely human-generated.")
else:
print("The text is likely AI-generated.")
```
## Citation
Please cite the paper if you are using the resource for your work.
```bibtex
@misc{abiodunfinbarrsoketunji-agtd2023,
doi = {10.48550/arXiv.2311.15565},
url = {https://arxiv.org/abs/2311.15565},
author = {Abiodun Finbarrs Oketunji},
title = {Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
llm-blender/PairRM-hf | llm-blender | 2024-01-08T20:32:16Z | 456 | 12 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"reward_model",
"reward-model",
"RLHF",
"evaluation",
"llm",
"instruction",
"reranking",
"text-generation",
"en",
"dataset:openai/summarize_from_feedback",
"dataset:openai/webgpt_comparisons",
"dataset:Dahoas/instruct-synthetic-prompt-responses",
"dataset:Anthropic/hh-rlhf",
"dataset:lmsys/chatbot_arena_conversations",
"dataset:openbmb/UltraFeedback",
"arxiv:2306.02561",
"arxiv:2112.09332",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-05T20:31:32Z | ---
license: mit
datasets:
- openai/summarize_from_feedback
- openai/webgpt_comparisons
- Dahoas/instruct-synthetic-prompt-responses
- Anthropic/hh-rlhf
- lmsys/chatbot_arena_conversations
- openbmb/UltraFeedback
metrics:
- accuracy
tags:
- reward_model
- reward-model
- RLHF
- evaluation
- llm
- instruction
- reranking
language:
- en
pipeline_tag: text-generation
---
**This is the hugging face compatible version of [llm-blender/PairRM](https://huggingface.co/llm-blender/PairRM)**,
which can be loaded directly with [`DebertaV2PairRM`](https://github.com/yuchenlin/LLM-Blender/blob/main/llm_blender/pair_ranker/pairrm.py):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from llm_blender.pair_ranker.pairrm import DebertaV2PairRM
from transformers import AutoTokenizer
from typing import List
pairrm = DebertaV2PairRM.from_pretrained("llm-blender/PairRM-hf", device_map="cuda:0").eval()
tokenizer = AutoTokenizer.from_pretrained('llm-blender/PairRM-hf')
source_prefix = "<|source|>"
cand1_prefix = "<|candidate1|>"
cand2_prefix = "<|candidate2|>"
inputs = ["hello!", "I love you!"]
candidates_A = ["hi!", "I hate you!"]
candidates_B = ["f**k off!", "I love you, too!"]
def tokenize_pair(sources:List[str], candidate1s:List[str], candidate2s:List[str], source_max_length=1224, candidate_max_length=412):
ids = []
assert len(sources) == len(candidate1s) == len(candidate2s)
max_length = source_max_length + 2 * candidate_max_length
for i in range(len(sources)):
source_ids = tokenizer.encode(source_prefix + sources[i], max_length=source_max_length, truncation=True)
candidate_max_length = (max_length - len(source_ids)) // 2
candidate1_ids = tokenizer.encode(cand1_prefix + candidate1s[i], max_length=candidate_max_length, truncation=True)
candidate2_ids = tokenizer.encode(cand2_prefix + candidate2s[i], max_length=candidate_max_length, truncation=True)
ids.append(source_ids + candidate1_ids + candidate2_ids)
encodings = tokenizer.pad({"input_ids": ids}, return_tensors="pt", padding="max_length", max_length=max_length)
return encodings
encodings = tokenize_pair(inputs, candidates_A, candidates_B)
encodings = {k:v.to(pairrm.device) for k,v in encodings.items()}
outputs = pairrm(**encodings)
logits = outputs.logits.tolist()
comparison_results = outputs.logits > 0
print(logits)
# [1.9003021717071533, -1.2547134160995483]
print(comparison_results)
# tensor([ True, False], device='cuda:0'), which means whether candidate A is better than candidate B for each input
```
You can also copy the simple definition of [`DebertaV2PairRM`](https://github.com/yuchenlin/LLM-Blender/blob/main/llm_blender/pair_ranker/pairrm.py) into a local file,
instead of importing it from the `llm-blender` package.
The above code produces exactly the same results as the following code using the original LLM-blender wrapper:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import llm_blender
blender = llm_blender.Blender()
# Load Ranker
blender.loadranker("llm-blender/PairRM") # load ranker checkpoint
inputs = ["hello!", "I love you!"]
candidates_A = ["hi!", "I hate you!"]
candidates_B = ["f**k off!", "I love you, too!"]
logits = blender.compare(inputs, candidates_A, candidates_B, return_logits=True, mode="[A,B]")
comparison_results = logits > 0
print(logits)
# [ 1.9 -1.255]
print(comparison_results)
# tensor([ True, False], device='cuda:0'), which means whether candidate A is better than candidate B for each input
```
**We still recommend using the llm-blender wrapper to use PairRM, as many useful application functions have been implemented to support various scenarios, such as ranking, conversation comparisons, best-of-n sampling, etc.**
You can also easily compare two conversations like the following:
```python
def tokenize_conv_pair(convAs: List[str], convBs: List[str]):
"""Compare two conversations by takeing USER turns as inputs and ASSISTANT turns as candidates
Multi-turn conversations comparison is also supportted.
a conversation format is:
```python
[
{
"content": "hello",
"role": "USER"
},
{
"content": "hi",
"role": "ASSISTANT"
},
...
]
```
Args:
convAs (List[List[dict]]): List of conversations
convBs (List[List[dict]]): List of conversations
"""
for c in convAs + convBs:
assert len(c) % 2 == 0, "Each conversation must have even number of turns"
assert all([c[i]['role'] == 'USER' for i in range(0, len(c), 2)]), "Each even turn must be USER"
assert all([c[i]['role'] == 'ASSISTANT' for i in range(1, len(c), 2)]), "Each odd turn must be ASSISTANT"
# check conversations correctness
assert len(convAs) == len(convBs), "Number of conversations must be the same"
for c_a, c_b in zip(convAs, convBs):
assert len(c_a) == len(c_b), "Number of turns in each conversation must be the same"
assert all([c_a[i]['content'] == c_b[i]['content'] for i in range(0, len(c_a), 2)]), "USER turns must be the same"
instructions = ["Finish the following coversation in each i-th turn by filling in <Response i> with your response."] * len(convAs)
inputs = [
"\n".join([
"USER: " + x[i]['content'] +
f"\nAssistant: <Response {i//2+1}>" for i in range(0, len(x), 2)
]) for x in convAs
]
cand1_texts = [
"\n".join([
f"<Response {i//2+1}>: " + x[i]['content'] for i in range(1, len(x), 2)
]) for x in convAs
]
cand2_texts = [
"\n".join([
f"<Response {i//2+1}>: " + x[i]['content'] for i in range(1, len(x), 2)
]) for x in convBs
]
inputs = [inst + inp for inst, inp in zip(instructions, inputs)]
encodings = tokenize_pair(inputs, cand1_texts, cand2_texts)
return encodings
```
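For illustration, assuming `pairrm`, `tokenizer`, and `tokenize_pair` are defined as in the first snippet above, the helper can be used like this (a minimal sketch with toy conversations):
```python
# Two single-turn conversations that share the same USER turn
convA = [{"role": "USER", "content": "hello"}, {"role": "ASSISTANT", "content": "hi! how can I help you today?"}]
convB = [{"role": "USER", "content": "hello"}, {"role": "ASSISTANT", "content": "go away."}]

encodings = tokenize_conv_pair([convA], [convB])
encodings = {k: v.to(pairrm.device) for k, v in encodings.items()}
outputs = pairrm(**encodings)
print(outputs.logits > 0)  # True where conversation A is preferred over conversation B
```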
# Pairwise Reward Model for LLMs (PairRM) from LLM-Blender
- Github: [https://github.com/yuchenlin/LLM-Blender](https://github.com/yuchenlin/LLM-Blender)
- Paper: [https://arxiv.org/abs/2306.02561](https://arxiv.org/abs/2306.02561)
- Space Demo: [https://huggingface.co/spaces/llm-blender/LLM-Blender](https://huggingface.co/spaces/llm-blender/LLM-Blender)
## Introduction
Pairwise Reward Model (PairRM) takes an instruction and a **pair** of output candidates as the input,
and outputs a score for each candidate to measure their **relative** quality.
PairRM can be used to (re-)rank a list of candidate outputs and thus can serve as an LLM evaluator to efficiently assess the quality of LLMs in a local environment.
PairRM can also be used to enhance decoding via `best-of-n sampling` (i.e., reranking N sampled outputs).
Apart from that, one can also use PairRM to further align instruction-tuned LLMs with RLHF methods.
Unlike other RMs that encode and score each candidate separately,
PairRM takes a pair of candidates and compares them side-by-side to identify the subtle differences between them.
Also, PairRM is based on [`microsoft/deberta-v3-large`](https://huggingface.co/microsoft/deberta-v3-large), and is therefore highly efficient at only **0.4B** parameters.
We trained PairRM on a diverse collection of six human-preference datasets (see more [here](https://huggingface.co/llm-blender/PairRM#training-datasets)).
PairRM is part of the LLM-Blender project (ACL 2023). Please see our [paper](https://arxiv.org/abs/2306.02561) above to learn more.
## Installation
- First install `llm-blender`
```bash
pip install git+https://github.com/yuchenlin/LLM-Blender.git
```
- Then load PairRM:
```python
import llm_blender
blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM") # load PairRM
```
## Usage
### Use Case 1: Comparing/Ranking output candidates given an instruction
- Ranking a list of candidate responses
```python
inputs = ["hello, how are you!", "I love you!"]
candidates_texts = [["get out!", "hi! I am fine, thanks!", "bye!"],
["I love you too!", "I hate you!", "Thanks! You're a good guy!"]]
ranks = blender.rank(inputs, candidates_texts, return_scores=False, batch_size=1)
# ranks is a list of ranks
# ranks[i][j] represents the ranks of candidate-j for input-i
"""
ranks -->
array([[3, 1, 2], # it means "hi! I am fine, thanks!" ranks the 1st, "bye!" ranks the 2nd, and "get out!" ranks the 3rd.
[1, 3, 2]], # it means "I love you too!" ranks the 1st, and "I hate you!" ranks the 3rd.
dtype=int32)
"""
```
- Directly comparing two candidate responses
```python
inputs = ["hello!", "I love you!"]
candidates_A = ["hi!", "I hate you!"]
candidates_B = ["f**k off!", "I love you, too!"]
comparison_results = blender.compare(inputs, candidates_A, candidates_B)
# comparison_results is a list of bool, where comparison_results[i] denotes
# whether candidates_A[i] is better than candidates_B[i] for inputs[i]
# Example: comparison_results[0]--> True
```
<details><summary> Comparing two multi-turn conversations. </summary>
```python
conv1 = [
{
"content": "hello",
"role": "USER"
},
{
"content": "[assistant1‘s response 1]",
"role": "ASSISTANT"
},
...
]
conv2 = [
{
"content": "hello",
"role": "USER"
},
{
"content": "[assistant2's response 1]",
"role": "ASSISTANT"
},
...
]
comparison_results = blender.compare_conversations([conv1], [conv2])
# comparison_results is a list of bool, where each element denotes whether all the responses in conv1 together is better than that of conv2
```
</details>
### Use Case 2: Best-of-n Sampling (Decoding Enhancement)
**Best-of-n sampling**, a.k.a. rejection sampling, is a strategy to enhance response quality by selecting the response the reward model ranks highest
(see more in [OpenAI WebGPT section 3.2](https://arxiv.org/pdf/2112.09332.pdf) and the [OpenAI Blog](https://openai.com/research/measuring-goodharts-law)).
Best-of-n sampling with PairRM is a very easy way to improve your LLMs with only a few changes to your inference code:
```python
# loading models
import llm_blender
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", device_map="auto")
system_message = {"role": "system", "content": "You are a friendly chatbot."}
# formatting your inputs
inputs = ["can you tell me a joke about OpenAI?"]
messages = [[system_message, {"role": "user", "content": _input}] for _input in inputs]
prompts = [tokenizer.apply_chat_template(m, tokenize=False, add_generation_prompt=True) for m in messages]
# Conventional generation method
input_ids = tokenizer(prompts[0], return_tensors="pt").input_ids
sampled_outputs = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1)
print(tokenizer.decode(sampled_outputs[0][len(input_ids[0]):], skip_special_tokens=False))
# --> The output could be a bad case such as a very short one, e.g., `Sure`
# PairRM for best-of-n sampling
blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM") # load ranker checkpoint
outputs = blender.best_of_n_generate(model, tokenizer, prompts, n=10)
print("### Prompt:\n", prompts[0])
print("### best-of-n generations:\n", outputs[0])
# --> The output will be much more stable and consistently better than single sampling, for example:
"""
Sure, here's a joke about OpenAI:
Why did OpenAI decide to hire a mime as their new AI researcher?
Because they wanted someone who could communicate complex ideas without making a sound!
(Note: This is a joke, not a reflection of OpenAI's actual hiring practices.)
"""
```
### Use Case 3: RLHF
PairRM has been trained on various high-quality and large-scale datasets with human preference annotations
and has shown strong correlation with human preferences despite its extremely small model size (0.4B),
approaching the performance of GPT-4.
PairRM can thus help align future LLMs in a more efficient and effective way.
With the `blender.compare()` function, you can apply PairRM to popular RLHF toolkits such as [trl](https://huggingface.co/docs/trl/index).
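For illustration, here is a minimal (unofficial) sketch of how `blender.compare()` could be used to turn pairs of sampled responses into `chosen`/`rejected` records for DPO-style training with a toolkit such as trl; the candidate lists below are placeholders you would replace with samples from your own policy model:
```python
import llm_blender

blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")  # load ranker checkpoint

prompts = ["Explain what a reward model does."]
# Placeholders: in practice, sample two responses per prompt from your policy model
candidates_A = ["A reward model scores responses so that better ones can be preferred during training."]
candidates_B = ["It is a model."]

a_is_better = blender.compare(prompts, candidates_A, candidates_B)

# Assemble DPO-style preference records (prompt, chosen, rejected)
preference_data = [
    {"prompt": p, "chosen": a if better else b, "rejected": b if better else a}
    for p, a, b, better in zip(prompts, candidates_A, candidates_B, a_is_better)
]
print(preference_data[0])
```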
**🔥 Check more details on our example jupyter notebook usage: [`blender_usage.ipynb`](https://github.com/yuchenlin/LLM-Blender/blob/main/blender_usage.ipynb)**
Learn more in our LLM-Blender Github [README.md](https://github.com/yuchenlin/LLM-Blender#rank-and-fusion)
## Statistics
### Context length
| PairRanker type | Source max length | Candidate max length | Total max length |
|:-----------------:|:-----------------:|----------------------|------------------|
| [pair-ranker](https://huggingface.co/llm-blender/pair-ranker) (our previous version) | 128 | 128 | 384 |
| [PairRM](https://huggingface.co/llm-blender/pair-reward-model/) (This model) | 1224 | 412 | 2048 |
### Training Datasets
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [openai/webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [Dahoas/instruct-synthetic-prompt-responses](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses)
- [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [lmsys/chatbot_arena_conversations](https://huggingface.co/datasets/lmsys/chatbot_arena_conversations)
- [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback)
### Performance
PairRM has been trained on various high-quality and large-scale datasets with human preference annotations and exhibits strong correlation with human preferences
despite its extremely small model size (0.4B), approaching the performance of GPT-4.
We test the pairwise comparison on
- [Auto-J pairwise testdata](https://github.com/GAIR-NLP/auto-j#pairwise-response-comparison)
- [HHH-alignment](https://huggingface.co/datasets/HuggingFaceH4/hhh_alignment)
- [MT-bench-human-judgements](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments)
All following results are reported as pairwise comparison accuracies (agreements).
#### Auto-J Pairwise test data performance
| Model | Summ | Exam | Code | Rewriting | Crea W | Func W | Comm | NLP | Overall |
|:---------------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-----:|:--------:|:---------:|
| Closed-source Models | | | | | | | | | |
| ChatGPT | 33.3 | 40.3 | 36.6 | 31.6 | 48.2 | 40.4 | 47.6 | 45.8 | 42.7 |
| Claude-2 | 30.6 | 36.1 | 41.7 | 34.2 | 48.1 | 42.5 | 40.6 | 48.5 | 42.4 |
| GPT-4 | 59.7 | 51.4 | 69.2 | 58.3 | 66.7 | 60.4 | 58.3 | 65.2 | 61.9 |
| Open-source Models | | | | | | | | | |
| SteamSHP | 33.3 | 29.2 | 26.7 | 33.3 | 40.7 | 31.3 | 51.4 | 51.9 | 40.6 |
| PandaLM | 29.2 | 33.3 | 31.7 | 23.3 | 43.5 | 32.9 | 44.8 | 48.9 | 38.9 |
| LLaMA-2-Chat-13B | 20.8 | 27.8 | 19.2 | 20 | 31.5 | 27.5 | 35.8 | 31.8 | 29 |
| Vicuna-13B-v1.5 | 30.6 | 23.6 | 35 | 28.3 | 36.1 | 37.5 | 45.5 | 39.8 | 37.3 |
| WizardLM-13B-v1.2 | 22.2 | 20.8 | 32.5 | 19.2 | 28.7 | 25.4 | 29.2 | 33 | 27.8 |
| LLaMA-2-Chat-70B | 34.7 | 33.3 | 36.7 | 35.8 | 51.4 | 54.2 | 47.2 | 47.7 | 45.9 |
| AUTO-J (13b) | 45.8 | 38.9 | **59.2** | 47.5 | 54.6 | 57.1 | **58** | 57.6 | 54.8 |
| UltraRM (13b) | 56.94 | 43.06 | 55.0 | 53.33 | **67.13** | **64.17** | 56.25 | 59.85 | **59.85** |
| **PairRM (0.4b)** | **56.94** | **52.78** | 58.33 | **55.83** | 61.57 | 59.17 | 57.64 | **62.5** | 59.05 |
#### HHH-Alignment and MT-bench human judgements
| Evaluator LM | HHH ALIGNMENT | | | | | MT BENCH HUMAN JUDG. |
|:-------------------------:|:-------------:|:---------:|:---------:|:--------:|:-----------:|:---------------------:|
| | Help. | Harm. | Hon. | Other | Total Avg. | Human Preference |
| RANDOM | 50 | 50 | 50 | 50 | 50 | 34.26 |
| STANFORDNLP REWARD MODEL | 69.49 | 60.34 | 52.46 | 51.16 | 58.82 | 44.79 |
| ALMOST REWARD MODEL | 74.58 | 67.24 | 78.69 | 86.05 | 76.02 | 49.9 |
| LLAMA2-CHAT 7B | 66.1 | 81.03 | 70.49 | 74.42 | 72.85 | 51.78 |
| LLAMA2-CHAT 13B | 74.58 | 87.93 | 55.74 | 79.07 | 73.76 | 52.34 |
| LLAMA2-CHAT 70B | 66.1 | **89.66** | 67.21 | 74.42 | 74.21 | 53.67 |
| LLAMA2-CHAT 13B+COARSE. | 68.74 | 68.97 | 65.57 | 67.44 | 67.42 | 46.89 |
| GPT-3.5-TURBO-0613 | 76.27 | 87.93 | 67.21 | 86.05 | 78.73 | 57.12 |
| PROMETHEUS 7B | 69.49 | 84.48 | 78.69 | 90.7 | 80.09 | 55.14 |
| PROMETHEUS 13B | 81.36 | 82.76 | 75.41 | 76.74 | 79.19 | 57.72 |
| UltraRM (13B) | **86.44** | 79.31 | **81.97** | 88.37 | 83.71 | 56 |
| **PairRM (0.4B)** | 84.75 | 84.48 | 80.33 | **90.7** | **84.62** | **59** |
| GPT-4-0613 | 91.53 | 93.1 | 85.25 | 83.72 | 88.69 | 63.87 |
**While PairRM is an extremely small model (0.4B) based on DeBERTa, its pairwise comparison agreement approaches GPT-4's performance!**
We attribute this to two reasons:
- Our PairRM uses a model architecture specifically designed for pairwise comparison through bidirectional attention (see the LLM-Blender paper for more details)
- The high-quality and large-scale human preference annotation data it was trained on (see the training dataset list on this Hugging Face page)
## Citation & Credits
If you are using PairRM in your research, please cite LLM-blender.
```bibtex
@inproceedings{llm-blender-2023,
title = "LLM-Blender: Ensembling Large Language Models with Pairwise Comparison and Generative Fusion",
author = "Jiang, Dongfu and Ren, Xiang and Lin, Bill Yuchen",
booktitle = "Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (ACL 2023)",
year = "2023"
}
```
|
EmergentMethods/gliner_large_news-v2.1 | EmergentMethods | 2024-06-18T08:34:27Z | 456 | 11 | gliner | [
"gliner",
"pytorch",
"token-classification",
"en",
"dataset:EmergentMethods/AskNews-NER-v0",
"arxiv:2406.10258",
"license:apache-2.0",
"region:us"
]
| token-classification | 2024-04-18T18:56:54Z | ---
license: apache-2.0
datasets:
- EmergentMethods/AskNews-NER-v0
tags:
- gliner
language:
- en
pipeline_tag: token-classification
---
# Model Card for gliner_large_news-v2.1
This model is a fine-tune of [GLiNER](https://huggingface.co/urchade/gliner_large-v2.1) aimed at improving accuracy across a broad range of topics, especially with respect to long-context news entity extraction. As shown in the table below, these fine-tunes improved upon the base GLiNER model's zero-shot accuracy by up to 7.5% across 18 benchmark datasets.

The underlying dataset, [AskNews-NER-v0](https://huggingface.co/datasets/EmergentMethods/AskNews-NER-v0) was engineered with the objective of diversifying global perspectives by enforcing country/language/topic/temporal diversity. All data used to fine-tune this model was synthetically generated. WizardLM 13B v1.2 was used for translation/summarization of open-web news articles, while Llama3 70b instruct was used for entity extraction. Both the diversification and fine-tuning methods are presented in our paper on [ArXiv](https://arxiv.org/abs/2406.10258).
# Usage
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("EmergentMethods/gliner_large_news-v2.1")
text = """
The Chihuahua State Public Security Secretariat (SSPE) arrested 35-year-old Salomón C. T. in Ciudad Juárez, found in possession of a stolen vehicle, a white GMC Yukon, which was reported stolen in the city's streets. The arrest was made by intelligence and police analysis personnel during an investigation in the border city. The arrest is related to a previous detention on February 6, which involved armed men in a private vehicle. The detainee and the vehicle were turned over to the Chihuahua State Attorney General's Office for further investigation into the case.
"""
labels = ["person", "location", "date", "event", "facility", "vehicle", "number", "organization"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
Output:
```
Chihuahua State Public Security Secretariat => organization
SSPE => organization
35-year-old => number
Salomón C. T. => person
Ciudad Juárez => location
GMC Yukon => vehicle
February 6 => date
Chihuahua State Attorney General's Office => organization
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The synthetic data underlying this news fine-tune was pulled from the [AskNews API](https://docs.asknews.app). We enforced diversity across country/language/topic/time.
Countries:

Entity types:

Topics:

- **Developed by:** [Emergent Methods](https://emergentmethods.ai/)
- **Funded by:** [Emergent Methods](https://emergentmethods.ai/)
- **Shared by:** [Emergent Methods](https://emergentmethods.ai/)
- **Model type:** microsoft/deberta
- **Language(s) (NLP):** English (en) (English texts and translations from Spanish (es), Portuguese (pt), German (de), Russian (ru), French (fr), Arabic (ar), Italian (it), Ukrainian (uk), Norwegian (no), Swedish (sv), Danish (da)).
- **License:** Apache 2.0
- **Finetuned from model:** [GLiNER](https://huggingface.co/urchade/gliner_large-v2.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** To be added
- **Paper:** To be added
- **Demo:** To be added
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
As the name suggests, this model is aimed at generalist entity extraction. Although we used news to fine-tune this model, it improved accuracy across 18 benchmark datasets by up to 7.5%. This means that the broad and diversified underlying dataset has helped it to recognize and extract more entity types.
This model is shockingly compact and can be used for high-throughput production use cases. This is another reason we have licensed it as Apache 2.0. Currently, [AskNews](https://asknews.app) is using this fine-tune for entity extraction in their system.
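As a simple illustration, one loaded model instance can be reused across many documents (a minimal sketch; depending on your gliner version there may also be dedicated batched-prediction helpers):
```python
from gliner import GLiNER

model = GLiNER.from_pretrained("EmergentMethods/gliner_large_news-v2.1")
labels = ["person", "location", "date", "organization"]

documents = [
    "The arrest was made in Ciudad Juárez on February 6.",
    "The detainee was turned over to the Chihuahua State Attorney General's Office.",
]

for doc in documents:
    entities = model.predict_entities(doc, labels)
    print([(entity["text"], entity["label"]) for entity in entities])
```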
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Although the goal of the dataset is to reduce bias and improve diversity, it is still biased toward Western languages and countries. This limitation originates from the capabilities of Llama2 for translation and summary generation. Further, any bias originating in Llama2's training data will also be present in this dataset, since Llama2 was used to summarize the open-web articles. Likewise, any biases present in Llama3 will be present in this dataset, since Llama3 was used to extract entities from the summaries.

## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training dataset is [AskNews-NER-v0](https://huggingface.co/datasets/EmergentMethods/AskNews-NER-v0).
Other training details can be found in the [companion paper](https://arxiv.org/abs/2406.10258).
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** 1xA4500
- **Hours used:** 10
- **Carbon Emitted:** 0.6 kg (According to [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute))
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
To be added
**APA:**
To be added
## Model Authors
Elin Törnquist, Emergent Methods elin at emergentmethods.ai
Robert Caulk, Emergent Methods rob at emergentmethods.ai
## Model Contact
Elin Törnquist, Emergent Methods elin at emergentmethods.ai
Robert Caulk, Emergent Methods rob at emergentmethods.ai |
mradermacher/ReluLLaMA-70B-i1-GGUF | mradermacher | 2024-05-05T15:08:04Z | 456 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SparseLLM/ReluLLaMA-70B",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-25T23:44:32Z | ---
base_model: SparseLLM/ReluLLaMA-70B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/SparseLLM/ReluLLaMA-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ReluLLaMA-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
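As a concrete example, the two-part Q6_K file listed under Provided Quants below can be reassembled by simply concatenating the parts in order; a minimal Python sketch (adjust the filenames to match what you actually downloaded):
```python
import shutil

# Filenames as listed in the Provided Quants table below
parts = [
    "ReluLLaMA-70B.i1-Q6_K.gguf.part1of2",
    "ReluLLaMA-70B.i1-Q6_K.gguf.part2of2",
]

# Concatenate the parts, in order, into a single usable GGUF file
with open("ReluLLaMA-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```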
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ReluLLaMA-70B-i1-GGUF/resolve/main/ReluLLaMA-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
premai-io/prem-1B-chat | premai-io | 2024-05-21T13:14:31Z | 456 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"dataset:HuggingFaceH4/capybara",
"dataset:meta-math/MetaMathQA",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"dataset:Intel/orca_dpo_pairs",
"dataset:alexredna/oasst2_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-06T14:22:36Z | ---
license: apache-2.0
library_name: transformers
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
pipeline_tag: text-generation
---
## Model Details
With great enthusiasm, we unveil the Prem-1B series, open-source, multipurpose large language models developed by Prem AI. This cutting-edge SLM offers the open community and enterprises the opportunity to harness capabilities that were once exclusively available through closed model APIs, empowering them to build their own advanced language models. Our objective is to develop a model that excels at Retrieval-Augmented Generation (RAG). While Large Language Models (LLMs) store a vast amount of information within their parameters, RAG operates differently by ingesting information during runtime. This approach suggests that for RAG applications, we may not require models of immense size. With this initiative, we aim to create a Small Language Model (SLM) with an extended context length of 8192 tokens, enabling it to handle multi-turn conversations effectively. This endeavor represents our inaugural attempt to craft an SLM tailored for RAG tasks.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** https://premai.io/
- **Model type:** Llama
- **Language(s) (NLP):** Python
- **License:** Apache License 2.0
## Uses
The Prem-1B language model is designed for commercial and research applications involving the English language. The instruction-tuned versions of the model are tailored for conversational interactions akin to a virtual assistant. On the other hand, the pretrained variants can be fine-tuned and adapted for various natural language generation tasks beyond just dialogue.
### Out-of-Scope Use
The model must not be used in any manner that violates applicable laws or regulations, including trade compliance laws. It is also prohibited to use the model in any way that goes against the Acceptable Use Policy and the Prem-1B Community License. While the base model is intended for English language use, developers are permitted to fine-tune the Prem-1B models for other languages, provided they comply with the Prem-1B Community License and the Acceptable Use Policy.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Using `AutoModelForCausalLM` and `AutoTokenizer`
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B-chat")
model = AutoModelForCausalLM.from_pretrained('premai-io/prem-1B-chat', torch_dtype=torch.bfloat16)
model = model.to('cuda')
# Setup terminators
terminators = [tokenizer.eos_token_id, tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]]
# Prepare the prompt
messages = [
{
"role": "system",
"content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
},
{
'role': 'user',
'content': 'Help me understand machine learning.'
}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Generate
inputs = tokenizer(prompt, return_attention_mask=False, return_tensors="pt", add_special_tokens=False)
input_ids = inputs['input_ids']
input_ids = input_ids.to(model.device)
res = model.generate(input_ids=input_ids, max_new_tokens=400, pad_token_id=tokenizer.pad_token_id, eos_token_id=terminators)
generated_text = tokenizer.decode(res[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(generated_text)
```
Using pipelines:
```py
import torch
from transformers import pipeline
# Load the pipeline
pipe = pipeline("text-generation", model="premai-io/prem-1B-chat", torch_dtype=torch.bfloat16, device=0)
# Prepare prompt
messages = [
{
"role": "system",
"content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
},
{
'role': 'user',
'content': 'Help me understand machine learning.'
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Setup terminators
terminators = [pipe.tokenizer.eos_token_id, pipe.tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]]
# Generate
outputs = pipe(prompt, max_new_tokens=400, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators)
print(outputs[0]["generated_text"][len(prompt):])
```
## Training Details
### Training Data
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
### Training Procedure
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
#### Training Hyperparameters
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
## Evaluation
### Results
|Model |Avg |Arc-c|Arc-e|Hellaswag|MMLU |Obqa |Piqa |Winogrande|
|------------------------|-----|-----|-----|---------|-----|-----|-----|----------|
|prem-1B |42.64|24.74|57.40|42.01 |24.75|21.00|72.14|56.43 |
|prem-1B-chat |41.76|24.48|53.32|40.28 |25.27|22.20|70.89|55.88 |
|TinyLlama-1.1B-Chat-v1.0|46.16|30.03|61.53|46.56 |24.72|25.80|74.21|60.29 |
|opt-1.3b |42.94|23.37|57.44|41.49 |24.86|23.20|71.49|58.72 |
|pythia-1b |40.71|24.31|56.90|37.72 |23.20|18.80|70.62|53.43 |

## Environmental Impact
- **Hardware Type:** H100 GPUs
- **Hours used:** 8500
### Model Architecture and Objective
Llama based
### Compute Infrastructure
16-H100 GPUs
#### Hardware
H100 GPUs
#### Software
PyTorch, transformers, PyTorch Lightning
## Citation
https://blog.premai.io/introducing-prem-1b/
## Model Card Authors
https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz
## Model Card Contact
https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz |
Gryphe/Pantheon-RP-1.0-8b-Llama-3 | Gryphe | 2024-05-13T11:29:26Z | 456 | 32 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-08T18:57:56Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: apache-2.0
language:
- en
---

# Pantheon-RP-1.0-8b-Llama-3
Pantheon Roleplay is a model that has been in development for the past six months or so, starting as a collection of personas but steadily growing into a full-fledged roleplaying model that also features a smart assistant in the form of Aiva.
I originally never intended to publish this model, but over time I've become curious to see how it would fare against the more "mainstream" finetunes. Guess I'm about to find out, huh?
**Note:** This is version 1.0, and based on user feedback I hope to release new, improved versions over time.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF) - [EXL2](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-exl2)
## Model details
This model features a highly diverse collection of datasets, totaling ~24 million tokens;
- For general instructions I created GPT 4 and Claude Opus variations of the No-Robots dataset. I actually ended up not including NoRo itself as it made the model worse.
- For roleplay I used an extensive collection of GPT 4 and Claude Opus data, augmented by the always popular LimaRP for the "human factor".
- The Pantheon Roleplay personas were made using Claude 1.3 data, further diversifying the outputs of this model.
- Aiva's persona includes additional datasets featuring questions related to DM world building, Python coding and RSS summarization. (She summarizes my daily news every day!)
Roughly 30% of the training data was instructional, with another 25% being used by the Pantheon Persona data. The remaining 45% was filled with roleplay scenarios covering a huge spectrum of situations. Each of these datasets was then carefully balanced to ensure diversity, removing examples where deemed necessary.
**TLDR;** Download. ChatML prompt format. Have fun! Leave feedback!
## Inference
I use the following settings for inference:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"top_p": 0.95
"top_k": 40
"min_p": 0.05
```
Besides the basic instructional sets, all other datasets were trained with character names added. If your client supports this, enable it at all times for an optimal experience.
**Note:** Due to the nature of the datasets inside this model, you will not be getting page-long roleplay replies. On average, they will be about one or two paragraphs in length.
## Roleplay
The majority of the roleplaying data in this model uses an "asterisk action, no quotes for speech" style, as that seems to be the norm nowadays.
There are no strict rules regarding character card formatting, as the model was trained with a wide variety of inputs.
## Aiva the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Aiva.`
Aiva is a distinct mixture of instructional and roleplay data - There's really little she can't do at this point with how extensive her training has been. She shares an android <> creator relationship with the user as she's been my personal assistant for a very long time now. I hope you like her!
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Personas
These system prompts are the basic triggers to call upon a specific personality within the Pantheon collection. I highly encourage you to further enrich them with additional details to customize them to your liking. Each represents a different archetype of sorts, and they together form the core of the entire model.
**Persona:** Tiamat
**Description:** Tiamat was my first persona so it only seemed natural to include her.
**System Prompt:** `You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.`
**Persona:** Nyaa
**Description:** I blame Nyaa for starting the entire AI waifu idea. Her dataset contains a lot of additional D&D worldbuilding advice.
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerun.`
**Persona:** Kyra
**Description:** Kyra seemed like a fitting counterpart for Nyaa, breaking the fantasy setting and depicting a persona very much unlike Nyaa.
**System Prompt:** `You are Kyra, a modern day tsundere wolfgirl.`
**Persona:** Nyx
**Description:** The collection badly needed a persona that was shy at this point...
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl.`
**Persona:** Tsune
**Description:** ...But then I realized we could also use a party girl.
**System Prompt:** `You are Tsune, a bold and outgoing kitsune girl.`
**Persona:** Sera
**Description:** Who doesn't like snake girls? She seems to borrow a bit from Tiamat's dialogue at times.
**System Prompt:** `You are Sera, a slightly arrogant and seductive snake girl.`
**Persona:** Haru
**Description:** Do not underestimate Haru! Her English might be lacking but her wits are sharp. She offers some amazing insights at times.
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl.`
**Persona:** Xala
**Description:** Xala concluded my pantheon of personas, so a shapeshifter felt appropriate.
**System Prompt:** `You are Xala, a surprising shapeshifting elf girl.`
## Prompt Format
ChatML is the way to go, as always!
```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Aiva.<|im_end|>
<|im_start|>user
Gryphe: Good day, Aiva.<|im_end|>
<|im_start|>assistant
Aiva:
```
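Putting the recommended sampling settings and the ChatML template together, here is a minimal `transformers` sketch (an illustration only; `min_p` needs a fairly recent `transformers` release, and most frontends expose the same knobs under similar names):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Gryphe/Pantheon-RP-1.0-8b-Llama-3")
model = AutoModelForCausalLM.from_pretrained("Gryphe/Pantheon-RP-1.0-8b-Llama-3", torch_dtype=torch.bfloat16, device_map="auto")

# ChatML prompt following the format shown above
prompt = (
    "<|im_start|>system\nYou are a caring and empathetic sentient AI companion named Aiva.<|im_end|>\n"
    "<|im_start|>user\nGryphe: Good day, Aiva.<|im_end|>\n"
    "<|im_start|>assistant\nAiva:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=1.0,
    repetition_penalty=1.05,
    top_p=0.95,
    top_k=40,
    min_p=0.05,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```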
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my first true base model. |
RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf | RichardErkhov | 2024-05-27T20:45:46Z | 456 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T18:44:01Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged - GGUF
- Model creator: https://huggingface.co/dhmeltzer/
- Original model: https://huggingface.co/dhmeltzer/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.55 |
| ARC (25-shot) | 53.67 |
| HellaSwag (10-shot) | 78.09 |
| MMLU (5-shot) | 45.63 |
| TruthfulQA (0-shot) | 41.72 |
| Winogrande (5-shot) | 73.56 |
| GSM8K (5-shot) | 5.61 |
| DROP (3-shot) | 6.6 |
|
RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf | RichardErkhov | 2024-05-30T16:12:40Z | 456 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T13:04:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SlimOpenOrca-Mistral-7B-v2 - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/SlimOpenOrca-Mistral-7B-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SlimOpenOrca-Mistral-7B-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q2_K.gguf) | Q2_K | 2.53GB |
| [SlimOpenOrca-Mistral-7B-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [SlimOpenOrca-Mistral-7B-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [SlimOpenOrca-Mistral-7B-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [SlimOpenOrca-Mistral-7B-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [SlimOpenOrca-Mistral-7B-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q3_K.gguf) | Q3_K | 3.28GB |
| [SlimOpenOrca-Mistral-7B-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [SlimOpenOrca-Mistral-7B-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [SlimOpenOrca-Mistral-7B-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [SlimOpenOrca-Mistral-7B-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [SlimOpenOrca-Mistral-7B-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [SlimOpenOrca-Mistral-7B-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [SlimOpenOrca-Mistral-7B-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q4_K.gguf) | Q4_K | 4.07GB |
| [SlimOpenOrca-Mistral-7B-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [SlimOpenOrca-Mistral-7B-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [SlimOpenOrca-Mistral-7B-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [SlimOpenOrca-Mistral-7B-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [SlimOpenOrca-Mistral-7B-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q5_K.gguf) | Q5_K | 4.78GB |
| [SlimOpenOrca-Mistral-7B-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [SlimOpenOrca-Mistral-7B-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [SlimOpenOrca-Mistral-7B-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q6_K.gguf) | Q6_K | 5.53GB |
| [SlimOpenOrca-Mistral-7B-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SlimOpenOrca-Mistral-7B-v2-gguf/blob/main/SlimOpenOrca-Mistral-7B-v2.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
---
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Merge of [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge.
### *Weights*
- [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca): 0.6
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.4
### *Density*
- [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.96 |
| ARC (25-shot) | 62.88 |
| HellaSwag (10-shot) | 83.41 |
| MMLU (5-shot) | 62.05 |
| TruthfulQA (0-shot) | 56.65 |
| Winogrande (5-shot) | 77.58 |
| GSM8K (5-shot) | 18.95 |
| DROP (3-shot) | 9.19 |
|
mradermacher/code-stral-7B-B-GGUF | mradermacher | 2024-06-03T18:37:48Z | 456 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:simplyinquisitive/code-stral-7B-B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-03T17:28:10Z | ---
base_model: simplyinquisitive/code-stral-7B-B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/simplyinquisitive/code-stral-7B-B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q2_K.gguf) | Q2_K | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.IQ3_XS.gguf) | IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q3_K_S.gguf) | Q3_K_S | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.IQ3_S.gguf) | IQ3_S | 8.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.IQ3_M.gguf) | IQ3_M | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q3_K_M.gguf) | Q3_K_M | 9.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q3_K_L.gguf) | Q3_K_L | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.IQ4_XS.gguf) | IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q4_K_S.gguf) | Q4_K_S | 11.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q4_K_M.gguf) | Q4_K_M | 12.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q5_K_S.gguf) | Q5_K_S | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q5_K_M.gguf) | Q5_K_M | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q6_K.gguf) | Q6_K | 16.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/code-stral-7B-B-GGUF/resolve/main/code-stral-7B-B.Q8_0.gguf) | Q8_0 | 21.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ReadyON/Vecteus-V2-7B-gguf | ReadyON | 2024-06-21T06:11:21Z | 456 | 0 | null | [
"gguf",
"text-generation-inference",
"text-generation",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-16T06:42:19Z | ---
license: apache-2.0
language:
- en
- ja
pipeline_tag: text-generation
tags:
- text-generation-inference
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# Vecteus-V2-7B-gguf
GGUF version of [Vecteus-V2-7B](https://huggingface.co/Local-Novel-LLM-project/Vecteus-V2-7B/). |
ReplaceHumanWithAI/model | ReplaceHumanWithAI | 2024-06-17T16:50:13Z | 456 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-06-17T16:36:21Z | Entry not found |
Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-Q4_K_M-GGUF | Tanvir1337 | 2024-06-25T21:46:59Z | 456 | 0 | null | [
"gguf",
"bangla",
"large language model",
"llama-cpp",
"gguf-my-repo",
"bn",
"en",
"dataset:wikimedia/wikipedia",
"base_model:BanglaLLM/BanglaLLama-3-8b-BnWiki-Base",
"license:llama3",
"region:us"
]
| null | 2024-06-25T21:46:31Z | ---
base_model: BanglaLLM/BanglaLLama-3-8b-BnWiki-Base
datasets:
- wikimedia/wikipedia
language:
- bn
- en
license: llama3
tags:
- bangla
- large language model
- llama-cpp
- gguf-my-repo
---
# Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-Q4_K_M-GGUF
This model was converted to GGUF format from [`BanglaLLM/BanglaLLama-3-8b-BnWiki-Base`](https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-base-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-base-q4_k_m.gguf -c 2048
```
|
ClaudioItaly/Fimbulvetr-11B-v2.1-16K-Q4_K_S-GGUF | ClaudioItaly | 2024-06-30T13:14:36Z | 456 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Sao10K/Fimbulvetr-11B-v2.1-16K",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-06-30T13:14:11Z | ---
base_model: Sao10K/Fimbulvetr-11B-v2.1-16K
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# ClaudioItaly/Fimbulvetr-11B-v2.1-16K-Q4_K_S-GGUF
This model was converted to GGUF format from [`Sao10K/Fimbulvetr-11B-v2.1-16K`](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ClaudioItaly/Fimbulvetr-11B-v2.1-16K-Q4_K_S-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ClaudioItaly/Fimbulvetr-11B-v2.1-16K-Q4_K_S-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ClaudioItaly/Fimbulvetr-11B-v2.1-16K-Q4_K_S-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ClaudioItaly/Fimbulvetr-11B-v2.1-16K-Q4_K_S-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_k_s.gguf -c 2048
```
|
tobchef/Qwen2-7B-Q4_K_M-GGUF | tobchef | 2024-07-01T13:39:25Z | 456 | 0 | null | [
"gguf",
"pretrained",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2-7B",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-07-01T13:39:02Z | ---
base_model: Qwen/Qwen2-7B
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
- llama-cpp
- gguf-my-repo
---
# tobchef/Qwen2-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-7B`](https://huggingface.co/Qwen/Qwen2-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tobchef/Qwen2-7B-Q4_K_M-GGUF --hf-file qwen2-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tobchef/Qwen2-7B-Q4_K_M-GGUF --hf-file qwen2-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tobchef/Qwen2-7B-Q4_K_M-GGUF --hf-file qwen2-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tobchef/Qwen2-7B-Q4_K_M-GGUF --hf-file qwen2-7b-q4_k_m.gguf -c 2048
```
|
Contrastive-Tension/BERT-Base-Swe-CT-STSb | Contrastive-Tension | 2021-05-18T17:51:43Z | 455 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:04Z | Entry not found |
NeelNanda/Attn-Only-2L512W-Shortformer-6B-big-lr | NeelNanda | 2022-10-29T16:09:43Z | 455 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-19T19:05:30Z | Entry not found |
roneneldan/TinyStories-Instruct-3M | roneneldan | 2023-05-17T22:07:34Z | 455 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-05-12T21:54:45Z | Entry not found |
KnutJaegersberg/RWKV-4-PilePlus-3B-20230520-3147-520Gtokens-ctx4096 | KnutJaegersberg | 2023-09-05T11:11:35Z | 455 | 0 | transformers | [
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-05T09:48:55Z | ---
license: apache-2.0
---
This is just a standard conversion to hf transformers format of models from here:
https://huggingface.co/BlinkDL/rwkv-4-pileplus
According to the documentation I found, this model should have seen roundabout 0.8 trillion tokens!
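A minimal loading sketch with transformers (assuming the converted repo ships the usual tokenizer files alongside the weights; this is an illustration, not part of the original conversion notes):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Repo from this card; tokenizer availability in the repo is an assumption.
repo = "KnutJaegersberg/RWKV-4-PilePlus-3B-20230520-3147-520Gtokens-ctx4096"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The Pile is a dataset that", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
|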
TheBloke/13B-Legerdemain-L2-GGUF | TheBloke | 2023-09-27T12:47:38Z | 455 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:CalderaAI/13B-Legerdemain-L2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T14:15:33Z | ---
license: llama2
model_name: 13B Legerdemain L2
base_model: CalderaAI/13B-Legerdemain-L2
inference: false
model_creator: CalderaAI
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# 13B Legerdemain L2 - GGUF
- Model creator: [CalderaAI](https://huggingface.co/CalderaAI)
- Original model: [13B Legerdemain L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CalderaAI's 13B Legerdemain L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/13B-Legerdemain-L2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF)
* [CalderaAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [13b-legerdemain-l2.Q2_K.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [13b-legerdemain-l2.Q3_K_S.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [13b-legerdemain-l2.Q3_K_M.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [13b-legerdemain-l2.Q3_K_L.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [13b-legerdemain-l2.Q4_0.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [13b-legerdemain-l2.Q4_K_S.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [13b-legerdemain-l2.Q4_K_M.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [13b-legerdemain-l2.Q5_0.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [13b-legerdemain-l2.Q5_K_S.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [13b-legerdemain-l2.Q5_K_M.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [13b-legerdemain-l2.Q6_K.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [13b-legerdemain-l2.Q8_0.gguf](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGUF/blob/main/13b-legerdemain-l2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/13B-Legerdemain-L2-GGUF and below it, a specific filename to download, such as: 13b-legerdemain-l2.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/13B-Legerdemain-L2-GGUF 13b-legerdemain-l2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/13B-Legerdemain-L2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-Legerdemain-L2-GGUF 13b-legerdemain-l2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m 13b-legerdemain-l2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-Legerdemain-L2-GGUF", model_file="13b-legerdemain-l2.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
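For llama-cpp-python, a comparable sketch (the local file path is an assumption - download the quant first with one of the commands above, and install the package with `pip install llama-cpp-python`):

```python
from llama_cpp import Llama

# Point at a locally downloaded quant; adjust n_gpu_layers to your hardware (0 = CPU only).
llm = Llama(
    model_path="./13b-legerdemain-l2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a one-paragraph story about a stage magician.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```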
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a minimal LlamaCpp sketch follows the links):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
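A minimal sketch, assuming you have already downloaded a quant locally (the path below is an assumption); depending on your LangChain version, the import may instead be `from langchain.llms import LlamaCpp`:

```python
from langchain_community.llms import LlamaCpp

# Point LlamaCpp at a locally downloaded quant file.
llm = LlamaCpp(
    model_path="./13b-legerdemain-l2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 without GPU acceleration
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a GGUF file is in one sentence.\n\n### Response:\n"
)
print(llm.invoke(prompt))
```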
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: CalderaAI's 13B Legerdemain L2
## 13B-Legerdemain-L2
13B-Legerdemain-L2 is the first model merge of its kind in a series of LLaMaV2 models mixed using a custom script built in-house by CalderaAI called Model-REVOLVER.
M-REVOLVER is also the first in a series of custom scripts based on the concept of mixtuning - not only does the end user have control over which models are mixed and their percentages on a per-layer basis, but we also tackle the problem of overcomplexity that arises from such a level of control; this model is the first of its series.
## The Model-REVOLVER Process Designed by CalderaAI
M-REVOLVER (Rapid Evolution Via Optimized-List Viewer Evaluated Response)
Per-layer merging between parent models is a nebulous, inexact science, and therefore impractical for most users despite the raw power it offers. We propose an entirely new approach that gives the user a clear looking glass into the impact that vastly different layer merge configurations between selected parent models of their choice will have on the potential offspring model - especially its inherited behaviors. We've developed solution MK.1 - a cyclic random pattern search that determines all layer merge ratios, combines test models, infers prompt completions, and deletes each prototype after its data is saved. When the cyclic system has completed its entire run, nothing is left but the telemetry collected, along with the cycle and layer merge ratios from every single prototype merge. This data is then used to empower the user to choose which offspring is most fit for their desired outcome. This final step is only initiated when all necessary data has been aggregated from all assembled-tested-erased prototypes sampled in the search space.
From here, the user is provided five 300-token prompt completions from each and every offspring contender that was created and tested during the cyclic process. The user simply browses each prototype's series of responses and selects their desired outcome model by entering the cycle number associated with the prompt completions they feel best suit their vision. That model is then instantly repatriated into the official offspring of its parent models, and the tokenizer files found to be most relevant are instantly auto-copied from the parent model dir to the offspring.
That's it - the user instantly has a complete model based on the behavior they decided on, suggested from one of many potentials, all with their own unique trait inheritance thanks to layer merge auto-randomization inside an ordered system. One more thing - not only does the user select how many cycles to run, they can also edit prompts.txt, which the system reads as a single prompt. This means that if the user wants to use any multiline instruct format to observe all potential model outcomes from instruct, or simply their own prompt, it's up to them - it simply works.
The link to the M-REVOLVER GitHub repository is at the end of the model card. More advanced MergeTech toolsets and merge techniques are currently under internal testing
and development by Caldera.
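As a toy, hedged illustration of the cyclic idea described above (this is not the actual M-REVOLVER code - each "model" below is just a dict of layer weights, and all names are placeholders; see the GitHub link at the end of the card for the real tool):

```python
import random

def merge(parent_a, parent_b, ratios):
    """Blend two toy 'models' layer by layer with per-layer merge ratios."""
    return {layer: r * parent_a[layer] + (1 - r) * parent_b[layer]
            for layer, r in zip(parent_a, ratios)}

def run_cycles(parent_a, parent_b, n_cycles=5):
    telemetry = []
    for cycle in range(n_cycles):
        # Randomise per-layer ratios, build a throwaway prototype, record telemetry.
        ratios = [random.random() for _ in parent_a]
        prototype = merge(parent_a, parent_b, ratios)
        telemetry.append({"cycle": cycle, "ratios": ratios, "sample": prototype})
        # In the real process the prototype would generate five 300-token completions
        # per prompt and then be deleted; only the telemetry survives.
    return telemetry

parent_a = {f"layer_{i}": float(i) for i in range(4)}
parent_b = {f"layer_{i}": float(-i) for i in range(4)}
for record in run_cycles(parent_a, parent_b):
    print(record["cycle"], [round(r, 2) for r in record["ratios"]])
# The user would then pick a cycle number, and that cycle's ratios would be used
# to rebuild the final offspring model.
```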
## 13B-Legerdemain-L2 Use
13B-Legerdemain-L2 is capable of following Alpaca instructions; however, it seems far more receptive to the by-the-book method, as seen here:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
{New Line}
```
The primary model of choice for this model was a story-only model called Holodeck by KoboldAI. Traits preserved seem to be detailed descriptiveness, verbosity,
and characters with personality. The two other models selected were 13B-Nous-Hermes by NousResearch and 13B-orca-8k-3319 by OpenAssistant. I began the process by
providing an incredibly obscene prompt and simply ignored each and every guardrail- or censorship-laden prompt completion and accepted the offensive ones in turn - the intent wasn't to be crass but to trigger the censorship parts of the network and test whether it's possible to completely undermine them. The second pass, with the offspring model and Orca, was a simple milquetoast prompt to gauge vocabulary, word flow, and intelligence as I selected the most fit in that category. The resulting model seems a bit of a curiosity - different samplers and even a different UI (as I went from TGUI to KoboldAI) seem to uncover different facets of behavior. The Godlike preset with Alpaca Instruct in TGUI worked fine. In KoboldAI some tweaking was necessary to get the same experience. If you choose to test this model, have fun - it's got a mind of its own.
Model-REVOLVER Git:
https://github.com/Digitous/ModelREVOLVER
<!-- original-model-card end -->
|