modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
automerger/Experiment27pasticheYamshadowexperiment28-7B | automerger | 2024-04-14T20:23:54Z | 398 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:automerger/YamshadowExperiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-14T20:22:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- automerger/YamshadowExperiment28-7B
---
# Experiment27pasticheYamshadowexperiment28-7B
Experiment27pasticheYamshadowexperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
## 🧩 Configuration
```yaml
models:
- model: automerger/Experiment27Pastiche-7B
# No parameters necessary for base model
- model: automerger/YamshadowExperiment28-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: automerger/Experiment27Pastiche-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment27pasticheYamshadowexperiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
lytang/MiniCheck-DeBERTa-v3-Large | lytang | 2024-06-20T06:39:23Z | 398 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2404.10774",
"arxiv:2111.09543",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-04-14T23:26:02Z | ---
language:
- en
pipeline_tag: text-classification
license: mit
---
# Model Summary
This is a fact-checking model from our work:
📃 [**MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents**](https://arxiv.org/pdf/2404.10774.pdf) ([GitHub Repo](https://github.com/Liyan06/MiniCheck))
The model is based on DeBERTa-v3-Large and predicts a binary label: 1 for supported and 0 for unsupported.
The model makes predictions at the *sentence level*. It takes a document and a sentence as input and determines
whether the sentence is supported by the document: **MiniCheck-Model(document, claim) -> {0, 1}**
MiniCheck-DeBERTa-v3-Large is fine-tuned from `microsoft/deberta-v3-large` ([He et al., 2023](https://arxiv.org/pdf/2111.09543.pdf))
on a combination of 35K examples:
- 21K ANLI examples ([Nie et al., 2020](https://aclanthology.org/2020.acl-main.441.pdf))
- 14K synthetic examples generated from scratch in a structured way (more details in the paper).
### Model Variants
We also provide two other MiniCheck model variants:
- [lytang/MiniCheck-Flan-T5-Large](https://huggingface.co/lytang/MiniCheck-Flan-T5-Large)
- [lytang/MiniCheck-RoBERTa-Large](https://huggingface.co/lytang/MiniCheck-RoBERTa-Large)
### Model Performance
<p align="center">
<img src="./cost-vs-bacc.png" width="360">
</p>
The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact),
built from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. MiniCheck-DeBERTa-v3-Large outperforms all
existing specialized fact-checkers of a similar scale by a large margin, but is 2% worse than our best model, MiniCheck-Flan-T5-Large, which
is on par with GPT-4 but 400x cheaper. See full results in our work.
Note: We only evaluated the performance of our models on real claims -- without any human intervention in
any form, such as injecting certain error types into model-generated claims. Such edited claims do not reflect
LLMs' actual behaviors.
# Model Usage Demo
Please first clone our [GitHub Repo](https://github.com/Liyan06/MiniCheck) and install necessary packages from `requirements.txt`.
### Below is a simple use case
```python
from minicheck.minicheck import MiniCheck
doc = "A group of students gather in the school library to study for their upcoming final exams."
claim_1 = "The students are preparing for an examination."
claim_2 = "The students are on vacation."
# model_name can be one of ['roberta-large', 'deberta-v3-large', 'flan-t5-large']
scorer = MiniCheck(model_name='deberta-v3-large', device=f'cuda:0', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=[doc, doc], claims=[claim_1, claim_2])
print(pred_label) # [1, 0]
print(raw_prob) # [0.9786180257797241, 0.01138285268098116]
```
### Test on our [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact) Benchmark
```python
import pandas as pd
from datasets import load_dataset
from minicheck.minicheck import MiniCheck
# load 13K test data
df = pd.DataFrame(load_dataset("lytang/LLM-AggreFact")['test'])
docs = df.doc.values
claims = df.claim.values
scorer = MiniCheck(model_name='deberta-v3-large', device=f'cuda:0', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=docs, claims=claims) # ~ 15 mins, depending on hardware
```
To evaluate the results on the benchmark:
```python
from sklearn.metrics import balanced_accuracy_score
df['preds'] = pred_label
result_df = pd.DataFrame(columns=['Dataset', 'BAcc'])
for dataset in df.dataset.unique():
sub_df = df[df.dataset == dataset]
bacc = balanced_accuracy_score(sub_df.label, sub_df.preds) * 100
result_df.loc[len(result_df)] = [dataset, bacc]
result_df.loc[len(result_df)] = ['Average', result_df.BAcc.mean()]
result_df.round(1)
```
# Citation
```
@misc{tang2024minicheck,
title={MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents},
author={Liyan Tang and Philippe Laban and Greg Durrett},
year={2024},
eprint={2404.10774},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
NotAiLOL/Boundary-Hermes-Chat-2x7B-MoE | NotAiLOL | 2024-04-18T08:48:56Z | 398 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"Nexusflow/Starling-LM-7B-beta",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:Nexusflow/Starling-LM-7B-beta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T08:21:39Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- NousResearch/Hermes-2-Pro-Mistral-7B
- Nexusflow/Starling-LM-7B-beta
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- Nexusflow/Starling-LM-7B-beta
---
# Boundary-Hermes-Chat-2x7B-MoE
Boundary-Hermes-Chat-2x7B-MoE is a Mixture of Experts (MoE) made with the following models:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
## 🧩 Configuration
```yaml
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: float16
gate_mode: cheap_embed
experts:
- source_model: NousResearch/Hermes-2-Pro-Mistral-7B
positive_prompts: ["You are a helpful general assistant."]
- source_model: Nexusflow/Starling-LM-7B-beta
positive_prompts: ["You are assistant for question and answering."]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "NotAiLOL/Boundary-Hermes-Chat-2x7B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
hi000000/insta_user1_llama2-koen | hi000000 | 2024-04-21T07:13:32Z | 398 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"music",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T05:51:33Z | ---
library_name: transformers
language:
- ko
- en
license: apache-2.0
tags:
- music
---
# Model Card for Model ID
### Model Description
- base model : beomi/llama-2-koen-13b
- dataset : crawling
## Model Details
## Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.\
- Developed by: hi000000
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
[More Information Needed] |
netcat420/MFANN3bv0.5 | netcat420 | 2024-04-22T17:11:42Z | 398 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-04-21T09:10:20Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rupeshs/hyper-sd-sdxl-1-step | rupeshs | 2024-04-28T02:09:16Z | 398 | 3 | diffusers | [
"diffusers",
"safetensors",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-04-23T15:58:40Z | ---
license: openrail++
---
This is a 1-step inference Hyper-SD SDXL model for use with [FastSD CPU](https://github.com/rupeshs/fastsdcpu).
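The repo is tagged as a diffusers `StableDiffusionXLPipeline`, so a plain single-step generation sketch might look roughly like this (the loading class, prompt, and sampler settings here are assumptions; FastSD CPU handles these details for you):
```python
from diffusers import StableDiffusionXLPipeline

# Minimal sketch: load the 1-step checkpoint and sample with a single inference step (CPU by default).
pipe = StableDiffusionXLPipeline.from_pretrained("rupeshs/hyper-sd-sdxl-1-step")

image = pipe(
    "a photo of a lighthouse at sunset",
    num_inference_steps=1,  # single-step inference
    guidance_scale=0.0,     # distilled few-step models are typically run without CFG
).images[0]
image.save("out.png")
```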
|
theo77186/Llama-3-70B-Instruct-norefusal | theo77186 | 2024-05-05T16:38:47Z | 398 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-05T16:08:46Z | ---
license: llama3
---
# Llama 3 70B Instruct no refusal
This model uses orthogonal feature ablation of the refusal direction, as featured in this
[paper](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction); a minimal sketch of the ablation step follows the calibration list below.
Calibration data:
- 256 prompts from [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2)
- 256 prompts from [AdvBench](https://github.com/llm-attacks/llm-attacks/blob/main/data/advbench/harmful_behaviors.csv)
- The direction is extracted between layers 40 and 41
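Purely as an illustrative sketch (not the author's unreleased code): the core directional-ablation step, given a unit-norm refusal direction `r_hat`, removes the component of each hidden state along that direction:
```python
import torch

def ablate_direction(hidden: torch.Tensor, r_hat: torch.Tensor) -> torch.Tensor:
    """Remove the component of `hidden` along the unit-norm refusal direction `r_hat`."""
    proj = (hidden @ r_hat).unsqueeze(-1) * r_hat   # projection onto the refusal direction
    return hidden - proj

# The direction is typically the normalized difference of mean activations on
# harmful vs. harmless calibration prompts at the chosen layer.
d_model = 8192                                      # hidden size of Llama 3 70B
harmful_mean, harmless_mean = torch.randn(d_model), torch.randn(d_model)
r_hat = torch.nn.functional.normalize(harmful_mean - harmless_mean, dim=0)

h = torch.randn(2, 5, d_model)                      # dummy hidden states (batch, seq, d_model)
h_ablated = ablate_direction(h, r_hat)
print((h_ablated @ r_hat).abs().max())              # ~0: refusal component removed
```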
I haven't tested the model, but like the 8B model, it may still refuse some instructions.
**Use this model responsibly; I decline any liability resulting from the use of this model.**
I will post the code later. |
saurav1199/adisesha-phi1.5-7-3-40000 | saurav1199 | 2024-05-08T12:18:46Z | 398 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"custom_code",
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-05-08T11:56:48Z | ---
library_name: transformers
license: bigscience-openrail-m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/L3-8B-Stheno-v3.1-GGUF | mradermacher | 2024-05-22T06:54:52Z | 398 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/L3-8B-Stheno-v3.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-22T00:53:23Z | ---
base_model: Sao10K/L3-8B-Stheno-v3.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Stheno-v3.1-GGUF/resolve/main/L3-8B-Stheno-v3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/pair-preference-model-LLaMA3-8B-GGUF | QuantFactory | 2024-05-26T03:05:22Z | 398 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"conversational",
"text-generation",
"arxiv:2405.07863",
"base_model:RLHFlow/pair-preference-model-LLaMA3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-24T15:24:15Z | ---
license: llama3
base_model: RLHFlow/pair-preference-model-LLaMA3-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- conversational
---
# pair-preference-model-LLaMA3-8B-GGUF
This is quantized version of [RLHFlow/pair-preference-model-LLaMA3-8B](https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B) created using llama.cpp
# Model Description
This preference model is trained from [LLaMA3-8B-it](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with the training script at [Reward Modeling](https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/pm_dev/pair-pm).
The dataset is RLHFlow/pair_preference_model_dataset. It achieves Chat 98.6, Chat-Hard 65.8, Safety 89.6, and Reasoning 94.9 on RewardBench.
See our paper [RLHF Workflow: From Reward Modeling to Online RLHF](https://arxiv.org/abs/2405.07863) for more details of this model.
## Serving the RM
Here is an example of using the preference model to rank a pair of responses. For n > 2 responses, it is recommended to use a tournament-style ranking strategy to pick the best response, so that the number of comparisons is linear in n (see the sketch below).
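Purely for illustration, a tournament-style selection over `n` candidates could be built on top of a hypothetical `compare(context, response_a, response_b)` helper that returns the index (0 or 1) of the preferred response, implemented with the pairwise scoring shown in the example that follows:
```python
# Illustrative only: keep a running "champion" and challenge it with each remaining
# candidate, so n responses require exactly n - 1 pairwise comparisons.
def tournament_best(context, responses, compare):
    best = responses[0]
    for challenger in responses[1:]:
        if compare(context, best, challenger) == 1:  # challenger preferred
            best = challenger
    return best
```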
```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = 0
# script_args.preference_name_or_path points to the preference model checkpoint
model = AutoModelForCausalLM.from_pretrained(script_args.preference_name_or_path,
                                             torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2").cuda()
tokenizer = AutoTokenizer.from_pretrained(script_args.preference_name_or_path, use_fast=True)
tokenizer_plain = AutoTokenizer.from_pretrained(script_args.preference_name_or_path, use_fast=True)
tokenizer_plain.chat_template = "\n{% for message in messages %}{% if loop.index0 % 2 == 0 %}\n\n<turn> user\n {{ message['content'] }}{% else %}\n\n<turn> assistant\n {{ message['content'] }}{% endif %}{% endfor %}\n\n\n"
prompt_template = "[CONTEXT] {context} [RESPONSE A] {response_A} [RESPONSE B] {response_B} \n"
token_id_A = tokenizer.encode("A", add_special_tokens=False)
token_id_B = tokenizer.encode("B", add_special_tokens=False)
assert len(token_id_A) == 1 and len(token_id_B) == 1
token_id_A = token_id_A[0]
token_id_B = token_id_B[0]
temperature = 1.0
model.eval()
response_chosen = "BBBB"
response_rejected = "CCCC"
## We can also handle multi-turn conversation.
instruction = [{"role": "user", "content": ...},
{"role": "assistant", "content": ...},
{"role": "user", "content": ...},
]
context = tokenizer_plain.apply_chat_template(instruction, tokenize=False)
responses = [response_chosen, response_rejected]
probs_chosen = []
for chosen_position in [0, 1]:
# we swap order to mitigate position bias
response_A = responses[chosen_position]
response_B = responses[1 - chosen_position]
prompt = prompt_template.format(context=context, response_A=response_A, response_B=response_B)
message = [
{"role": "user", "content": prompt},
]
input_ids = tokenizer.encode(tokenizer.apply_chat_template(message, tokenize=False).replace(tokenizer.bos_token, ""), return_tensors='pt', add_special_tokens=False).cuda()
with torch.no_grad():
output = model(input_ids)
logit_A = output.logits[0, -1, token_id_A].item()
logit_B = output.logits[0, -1, token_id_B].item()
# take softmax to get the probability; using numpy
Z = np.exp(logit_A / temperature) + np.exp(logit_B / temperature)
logit_chosen = [logit_A, logit_B][chosen_position]
prob_chosen = np.exp(logit_chosen / temperature) / Z
probs_chosen.append(prob_chosen)
avg_prob_chosen = np.mean(probs_chosen)
correct = 0.5 if avg_prob_chosen == 0.5 else float(avg_prob_chosen > 0.5)
print(correct)
``` |
marulyanova/first-step-sb-right | marulyanova | 2024-06-06T14:53:49Z | 398 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-06T14:47:29Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### first_step_sb_right Dreambooth model trained by marulyanova with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
LennartKeller/longformer-gottbert-base-8192-aw512 | LennartKeller | 2023-04-28T09:15:37Z | 397 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: first
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first
This model is a fine-tuned version of [longformer-gottbert-base-8192-aw512](https://huggingface.co/longformer-8192-aw512-gottbert-base) on a 500-million-token subset of the German portion of the OSCAR dataset.
It achieves the following results on the custom evaluation set:
- Loss: 1.4981
## Model description
The weights of the model are initialized from the German RoBERTa model [gottbert-base](https://huggingface.co/uklfr/gottbert-base).
The local attention windows have a fixed size of 512 tokens across all layers.
The maximum sequence length is 8192.
## Intended uses & limitations
Longformer models enable processing of long texts using a mixture of local attention on each subword token and task-specific global attention on a subset of the tokens; a usage sketch is shown below.
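The following sketch is illustrative only (it assumes the checkpoint loads with the standard 🤗 Transformers Longformer classes via `AutoModel`); global attention is requested per token through `global_attention_mask`, here placed only on the first token:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "LennartKeller/longformer-gottbert-base-8192-aw512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "Ein langer deutscher Beispieltext ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192)

# Local attention everywhere; task-specific global attention only on the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```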
## Training and evaluation data
The [OSCAR](https://oscar-corpus.com) dataset is a freely available corpus of filtered web texts from the Common Crawl in various languages. We used the 2017 version of the dataset.
## Training procedure
The model was trained with masked language modeling for 3 epochs on a custom 500-million-token subset of the German portion of the [OSCAR](https://oscar-corpus.com) dataset.
It was validated using 5% of the original subset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5636 | 0.1 | 500 | 2.2399 |
| 2.0426 | 0.2 | 1000 | 1.8841 |
| 1.9653 | 0.3 | 1500 | 1.7807 |
| 1.9422 | 0.4 | 2000 | 1.7206 |
| 1.9323 | 0.49 | 2500 | 1.6800 |
| 1.7587 | 0.59 | 3000 | 1.6507 |
| 1.7239 | 0.69 | 3500 | 1.6316 |
| 1.7452 | 0.79 | 4000 | 1.6137 |
| 1.7415 | 0.89 | 4500 | 1.5983 |
| 1.7733 | 0.99 | 5000 | 1.5830 |
| 1.7656 | 1.09 | 5500 | 1.5735 |
| 1.6543 | 1.19 | 6000 | 1.5643 |
| 1.7131 | 1.28 | 6500 | 1.5546 |
| 1.6456 | 1.38 | 7000 | 1.5503 |
| 1.716 | 1.48 | 7500 | 1.5422 |
| 1.806 | 1.58 | 8000 | 1.5377 |
| 1.8407 | 1.68 | 8500 | 1.5327 |
| 1.6371 | 1.78 | 9000 | 1.5278 |
| 1.6453 | 1.88 | 9500 | 1.5231 |
| 1.7754 | 1.98 | 10000 | 1.5214 |
| 1.7695 | 2.08 | 10500 | 1.5165 |
| 1.7109 | 2.17 | 11000 | 1.5138 |
| 1.6992 | 2.27 | 11500 | 1.5107 |
| 1.6707 | 2.37 | 12000 | 1.5097 |
| 1.6835 | 2.47 | 12500 | 1.5040 |
| 1.7171 | 2.57 | 13000 | 1.5041 |
| 1.7257 | 2.67 | 13500 | 1.4990 |
| 1.6287 | 2.77 | 14000 | 1.5017 |
| 1.7737 | 2.87 | 14500 | 1.4983 |
| 1.4002 | 2.96 | 15000 | 1.4992 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lorahub/flan_t5_large-wiki_hop_original_choose_best_object_interrogative_1 | lorahub | 2023-07-24T09:41:29Z | 397 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T09:41:19Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-wiqa_which_of_the_following_is_the_supposed_perturbation | lorahub | 2023-07-24T10:03:39Z | 397 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:03:29Z | ---
library_name: peft
---
|
hogiahien/LoliV5-edited | hogiahien | 2023-08-15T02:15:25Z | 397 | 10 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-03T01:33:27Z | ---
duplicated_from: kebab111/LoliV5
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
I have no idea what I am doing. |
timm/mobileone_s4.apple_in1k | timm | 2023-08-23T19:07:37Z | 397 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2206.04040",
"license:other",
"region:us"
]
| image-classification | 2023-08-23T19:07:25Z | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobileone_s4
A MobileOne image classification model. Trained on ImageNet-1k by paper authors.
Please observe [original license](https://github.com/apple/ml-mobileone/blob/b7f4e6d48884593c7eb46eedc53c3a097c09e957/LICENSE).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.0
- GMACs: 3.0
- Activations (M): 17.7
- Image size: 224 x 224
- **Papers:**
- MobileOne: An Improved One millisecond Mobile Backbone: https://arxiv.org/abs/2206.04040
- **Original:** https://github.com/apple/ml-mobileone
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobileone_s4', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobileone_s4',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 192, 56, 56])
# torch.Size([1, 448, 28, 28])
# torch.Size([1, 896, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobileone_s4',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{mobileone2022,
title={An Improved One millisecond Mobile Backbone},
author={Vasu, Pavan Kumar Anasosalu and Gabriel, James and Zhu, Jeff and Tuzel, Oncel and Ranjan, Anurag},
journal={arXiv preprint arXiv:2206.04040},
year={2022}
}
```
|
TheBloke/Pandalyst-7B-V1.1-GGUF | TheBloke | 2023-09-30T13:46:18Z | 397 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"code",
"en",
"base_model:pipizhao/Pandalyst-7B-V1.1",
"license:llama2",
"model-index",
"text-generation-inference",
"region:us"
]
| null | 2023-09-30T13:37:25Z | ---
base_model: pipizhao/Pandalyst-7B-V1.1
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: Pandalyst_7B_v1.1
results:
- metrics:
- name: exec@1
type: exec@1
value: 0.76
verified: false
task:
type: text-generation
model_creator: Yanzhao Zheng
model_name: Pandalyst 7B V1.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pandalyst 7B V1.1 - GGUF
- Model creator: [Yanzhao Zheng](https://huggingface.co/pipizhao)
- Original model: [Pandalyst 7B V1.1](https://huggingface.co/pipizhao/Pandalyst-7B-V1.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yanzhao Zheng's Pandalyst 7B V1.1](https://huggingface.co/pipizhao/Pandalyst-7B-V1.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF)
* [Yanzhao Zheng's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pipizhao/Pandalyst-7B-V1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (a worked tally follows this list).
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
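As a sanity check on these figures, the Q4_K layout can be tallied as follows (the extra fp16 super-block scale and min are an assumption, but they make the arithmetic match the stated 4.5 bpw):
```python
# Q4_K super-block: 8 blocks x 32 weights = 256 weights
weights_bits = 8 * 32 * 4          # 4-bit quantized weights        -> 1024 bits
block_meta   = 8 * (6 + 6)         # per-block scale + min, 6 bits  ->   96 bits
super_meta   = 2 * 16              # assumed fp16 super-scale + min ->   32 bits
bpw = (weights_bits + block_meta + super_meta) / 256
print(bpw)                         # 4.5 bits per weight
```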
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [pandalyst-7b-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [pandalyst-7b-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [pandalyst-7b-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [pandalyst-7b-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [pandalyst-7b-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pandalyst-7b-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [pandalyst-7b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [pandalyst-7b-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pandalyst-7b-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [pandalyst-7b-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [pandalyst-7b-v1.1.Q6_K.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [pandalyst-7b-v1.1.Q8_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-V1.1-GGUF/blob/main/pandalyst-7b-v1.1.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Pandalyst-7B-V1.1-GGUF and below it, a specific filename to download, such as: pandalyst-7b-v1.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Pandalyst-7B-V1.1-GGUF pandalyst-7b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Pandalyst-7B-V1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pandalyst-7B-V1.1-GGUF pandalyst-7b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m pandalyst-7b-v1.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pandalyst-7B-V1.1-GGUF", model_file="pandalyst-7b-v1.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
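As a minimal illustrative sketch only (import paths and parameters vary across LangChain versions; the guides above are authoritative):
```python
from langchain.llms import LlamaCpp  # newer versions: from langchain_community.llms import LlamaCpp

# Point this at a GGUF file downloaded as shown above.
llm = LlamaCpp(
    model_path="pandalyst-7b-v1.1.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,   # set to 0 for CPU-only inference
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is the mean of a pandas column?\n\n### Response:"
)
print(llm(prompt))
```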
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yanzhao Zheng's Pandalyst 7B V1.1
## Pandalyst: A large language model for mastering data analysis using pandas
<p align="center">
<img src="https://raw.githubusercontent.com/zhengyanzhao1997/Pandalyst/master/imgs/pandalyst.png" width="300"/>
</p>
<p align="center">
🐱 <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github Repo</a> <br>
</p>
**What is Pandalyst**
- Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library.
**How is Pandalyst**
- Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs.
**Why is Pandalyst**
- Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on local PC.
- Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail.
- Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models.
## News
- 🔥[2023/09/30] We released **Pandalyst-7B-V1.1** , which was trained on **CodeLlama-7b-Python** and achieves the **76.1 exec@1** in our **PandaTest_V1.0** and surpasses **Pandalyst-13B-V1.0**, **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
- 🔥[2023/09/28] We released **Pandalyst-13B-V1.0** , which was trained on **WizardCoder-Python-13B-V1.0** and achieves the **70.7 exec@1** in our **PandaTest_V1.0** and surpasses **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Base Model | PandaTest_V1.0 | EASY | HARD | License |
|--------------------|---------------------------------------------------------------------------------------------|------------|----------------|---------------------|---------------------| ----- |
| Pandalyst-13B-V1.0 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst_13B_V1.0" target="_blank">HF Link</a> | WizardCoder-Python-13B-V1.0 | 70.7 | 75.6 | 65.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| Pandalyst-7B-V1.1 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst-7B-V1.1" target="_blank">HF Link</a> | CodeLlama-7b-Python | 76.1 | 85.2 | 67.0 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
## Usage and Human evaluation
Please refer to <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github</a>.
<!-- original-model-card end -->
|
audreyt/Taiwan-LLM-7B-v2.1-chat-GGUF | audreyt | 2023-11-17T15:46:33Z | 397 | 6 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-11-17T15:36:59Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
inference: false
quantized_by: audreyt
---
# Taiwan-LLM-7B-v2.1-chat-GGUF - GGUF
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat)
## Description
This repo contains GGUF format model files for Yen-Ting Lin's [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat).
Any utilization of the Taiwan LLM repository mandates explicit acknowledgment of and attribution to Ubitus K.K. and the original author.
## About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
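As a quick illustration, the GGUF files in this repo can be loaded with one of the clients listed above; a minimal `llama-cpp-python` sketch might look like the following. The quantisation filename and sampling settings are assumptions, and the prompt reuses the widget example from this card.
```python
from llama_cpp import Llama

# The filename below is an assumption - substitute whichever .gguf quantisation you downloaded.
llm = Llama(model_path="Taiwan-LLM-7B-v2.1-chat.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```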
<!-- footer start -->
<!-- footer end -->
# Original model card
---
# 🌟 Check out the new [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Taiwan LLM based on Mistral-7B-v0.1
Continued pretraining on 20 billion tokens of Traditional Mandarin text, followed by instruction fine-tuning on millions of conversations.
This version does NOT include CommonCrawl.
# Collaboration with Ubitus K.K. 💪💪💪
Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net). Ubitus provides valuable technical support and compute resources for the project.
|
llava-hf/vip-llava-13b-hf | llava-hf | 2024-06-28T12:23:10Z | 397 | 8 | transformers | [
"transformers",
"safetensors",
"vipllava",
"pretraining",
"image-to-text",
"en",
"arxiv:2312.00784",
"region:us"
]
| image-to-text | 2023-12-14T12:16:26Z | ---
language:
- en
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
---
# VipLLaVA Model Card

Below is the model card of the VipLLaVA 13B model, copied from the original LLaVA model card that you can find [here](https://huggingface.co/liuhaotian/llava-v1.5-13b).
Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance (the model works similarly as Llava): [](https://colab.research.google.com/drive/1-0G7Kuj2iQgKux4NJneP2JefFMamxG6Q?usp=sharing)
Or check out our Spaces demo! [](https://huggingface.co/spaces/llava-hf/llava-4bit)
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
ViP-LLaVA enhances the training protocol of LLaVA by marking images and interacting with the model using natural visual cues, such as a “red bounding box” or a “pointed arrow”, during training.
**Model date:**
ViP-LLaVa was released in December 2023.
**Paper or resources for more information:**
https://vip-llava.github.io/
## How to use the model
First, make sure to have `transformers >= 4.35.3`.
The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure to follow the correct prompt template and add the token `<image>` at the location where you want to query images.
According to the official code base, it is recommended to use this template:
```bash
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n<prompt>###Assistant:
```
Where `<prompt>` denotes the prompt asked by the user
### Using `pipeline`:
```python
from transformers import pipeline
from PIL import Image
import requests
model_id = "llava-hf/vip-llava-13b-hf"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
prompt = f"A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n{question}###Assistant:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
```
### Using pure `transformers`:
Below is an example script to run generation in `float16` precision on a GPU device:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, VipLlavaForConditionalGeneration
model_id = "llava-hf/vip-llava-13b-hf"
question = "What are these?"
prompt = f"A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n{question}###Assistant:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = VipLlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Model optimization
#### 4-bit quantization through `bitsandbytes` library
First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:
```diff
model = VipLlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ load_in_4bit=True
)
```
#### Use Flash-Attention 2 to further speed-up generation
First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:
```diff
model = VipLlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ use_flash_attention_2=True
).to(0)
```
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Citation
To cite this work please use
```bibtex
@misc{cai2023making,
title={Making Large Multimodal Models Understand Arbitrary Visual Prompts},
author={Mu Cai and Haotian Liu and Siva Karthik Mustikovela and Gregory P. Meyer and Yuning Chai and Dennis Park and Yong Jae Lee},
year={2023},
eprint={2312.00784},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
hooman650/bge-m3-onnx-o4 | hooman650 | 2024-02-06T04:25:23Z | 397 | 4 | transformers | [
"transformers",
"onnx",
"xlm-roberta",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2024-02-06T04:17:42Z | ---
license: mit
pipeline_tag: feature-extraction
---
# bge-m3-onnx-o4
These are the `bge-m3-onnx-o4` weights of the original [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3). Why is this model cool?
- [x] Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- [x] Multi-Linguality: It can support more than **100** working languages.
- [x] Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to **8192** tokens.
## Usage
### IMPORTANT - DOWNLOAD MODEL WEIGHTS
Please see the instructions below.
1. **Download** the checkpoint: For some reason you cannot directly load from this online version (you will get an exception).
Please download this repo as below:
```python
# pip install huggingface-hub
from huggingface_hub import snapshot_download
snapshot_download(repo_id="hooman650/bge-m3-onnx-o4",local_dir="bge-m3-onnx")
```
### Dense Retrieval
```
# for cuda
pip install --upgrade-strategy eager optimum[onnxruntime]
```
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
import torch
# Make sure that you download the model weights locally to `bge-m3-onnx`
model = ORTModelForFeatureExtraction.from_pretrained("bge-m3-onnx", provider="CUDAExecutionProvider") # omit provider for CPU usage.
tokenizer = AutoTokenizer.from_pretrained("hooman650/bge-m3-onnx-o4")
sentences = [
"English: The quick brown fox jumps over the lazy dog.",
"Spanish: El rápido zorro marrón salta sobre el perro perezoso.",
"French: Le renard brun rapide saute par-dessus le chien paresseux.",
"German: Der schnelle braune Fuchs springt über den faulen Hund.",
"Italian: La volpe marrone veloce salta sopra il cane pigro.",
"Japanese: 速い茶色の狐が怠惰な犬を飛び越える。",
"Chinese (Simplified): 快速的棕色狐狸跳过懒狗。",
"Russian: Быстрая коричневая лиса прыгает через ленивую собаку.",
"Arabic: الثعلب البني السريع يقفز فوق الكلب الكسول.",
"Hindi: तेज़ भूरी लोमड़ी आलसी कुत्ते के ऊपर कूद जाती है।"
]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt').to("cuda")
# Get the embeddings
out=model(**encoded_input,return_dict=True).last_hidden_state
# normalize the embeddings
dense_vecs = torch.nn.functional.normalize(out[:, 0], dim=-1)
```
### Multi-Vector (ColBERT)
`coming soon...`
|
neuralmagic/Llama-2-7b-pruned50-retrained | neuralmagic | 2024-05-07T15:25:25Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"sparse",
"dataset:cerebras/SlimPajama-627B",
"arxiv:2301.00774",
"arxiv:2405.03594",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1907.10641",
"arxiv:1911.01547",
"arxiv:2109.07958",
"arxiv:2110.14168",
"arxiv:2107.03374",
"base_model:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-15T15:44:40Z | ---
base_model: meta-llama/Llama-2-7b-hf
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- cerebras/SlimPajama-627B
tags:
- sparse
---
# Llama-2-7b-pruned50-retrained
This repo contains model files for a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) model that has had 50% of the parameters pruned in one-shot with [SparseGPT](https://arxiv.org/abs/2301.00774), then retrained by [Cerebras](https://huggingface.co/cerebras) with 45B tokens from SlimPajama while maintaining sparsity.
Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
**Authors**: Neural Magic, Cerebras
## Usage
Below we share some code snippets on how to get quickly started with running the model.
### Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
### Running the model
This model has not been fine-tuned for instruction-following but may be run with the transformers library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse).
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-pruned50-retrained")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-pruned50-retrained", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
## Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b | Llama-2-7b-pruned50-retrained |
|------------------------------------------------|---------------|-------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot | 46.9% | 41.3% |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 78.6% | 76.5% |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 74.0% | 72.1% |
| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 53.1% | 49.8% |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | 38.8% | 37.7% |
| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 14.5% | 9.17% |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 13.4% | 14.7% |
## Model Training Details
Coming soon.
## Help
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF | mradermacher | 2024-05-06T04:57:56Z | 397 | 0 | transformers | [
"transformers",
"gguf",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"base_model:NousResearch/Nous-Hermes-Llama2-70b",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-13T18:27:11Z | ---
base_model: NousResearch/Nous-Hermes-Llama2-70b
language:
- en
library_name: transformers
license:
- mit
quantized_by: mradermacher
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
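For the multi-part quants listed below (for example i1-Q6_K, which ships as two parts), a minimal Python sketch for joining the downloaded parts into a single GGUF file, assuming the parts are plain byte splits as the concatenation instructions linked above describe:
```python
import shutil

# Filenames taken from the i1-Q6_K row of the table below.
parts = [
    "Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part1of2",
    "Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part2of2",
]
with open("Nous-Hermes-Llama2-70b.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # byte-wise concatenation
```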
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sambanovasystems/SambaLingo-Thai-Chat-70B | sambanovasystems | 2024-04-17T00:43:09Z | 397 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"th",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T20:06:57Z | ---
language:
- th
- en
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
---
# SambaLingo-Thai-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Chat-70B is a human-aligned chat model trained in Thai and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Thai-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
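Putting the template and the suggested parameters together, a minimal end-to-end generation sketch might look like the following; the example question and the `max_new_tokens` value are illustrative assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", torch_dtype="auto")

question = "Tell me about the history of Bangkok."  # illustrative placeholder
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"  # chat template from the guidelines above

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,        # suggested inference parameters from this card
    repetition_penalty=1.0,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```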
### Example Prompts and Generations
```
<|user|>
ประเทศไทยช่วงเช้าเคารพธงชาติเมื่อไร</s>
<|assistant|>
ในประเทศไทย เวลาเคารพธงชาติคือเวลา 08.00 น. และ 18.00 น. ทุกวัน ประชาชนจะยืนตรงและร้องเพลงชาติในช่วงเวลาเหล่านี้เพื่อเป็นสัญลักษณ์ของความรักชาติและความเคารพต่อประเทศ
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by the Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
IAFrance/ECE-TW3-JRGL-VHF6 | IAFrance | 2024-04-15T20:52:37Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T20:35:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- MTSAIR/MultiVerse_70B
- abacusai/Smaug-72B-v0.1
---
# ECE-TW3-JRGL-VHF6
ECE-TW3-JRGL-VHF6 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration |
hexinran09/xr_dat_test | hexinran09 | 2024-04-18T12:13:57Z | 397 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-17T07:47:29Z | ---
license: apache-2.0
---
DAT Method test
Experiment set checkpoint-542
Basic concepts of knowledge distillation
Traditional concept
A large, complex model (the teacher) transfers its knowledge to a smaller, more efficient model (the student). This process reduces the resources required to deploy the model, allowing it to run in resource-constrained environments such as mobile devices or edge computing platforms.
In the context of large models
In the context of large language models (LLMs), the goal of knowledge distillation is not merely to simplify the model architecture; it also includes extracting and transferring the model's deeper knowledge and understanding. This knowledge is not limited to the model's direct outputs, but also covers its decision-making process, reasoning patterns, and cognitive strategies. By designing precise prompts, domain-specific knowledge can be extracted from LLMs, so that the student model not only imitates the teacher's answers but also learns its reasoning process.
The relationship between data augmentation (DA) and knowledge distillation (KD)
Knowledge distillation (KD) and data augmentation (DA) complement each other in improving the performance of small models. In short, KD is a process in which a small model (the student) learns from a large model (the teacher), while DA is a method for expanding the dataset, generating new and diverse training samples to improve generalization. In this process, DA helps produce more training data, which the KD process then uses so that the student model can better learn the teacher's behavior and decision-making. By combining the two techniques, compact yet powerful models can be built, effectively improving their performance and range of applications.
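As an illustration only (not code from this repository), a minimal sketch of the classic distillation objective described above, mixing soft targets from the teacher with hard ground-truth labels; the temperature and mixing weight `alpha` are illustrative assumptions:
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled teacher/student distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```
|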
R136a1/BeyondInfinity-v2-4x7B | R136a1 | 2024-05-16T13:48:38Z | 397 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-17T11:35:09Z | ---
language:
- en
license: apache-2.0
tags:
- safetensors
- mixtral
---
I prefer this one over v1 since it's a bit more creative and _smart_, and it understands the story better. This uses some different models from v1 but performs very close to it (I guess since I used the same model for the base?). And yeah, I'm using this as my daily model for RP (llama 3 is smart but not creative enough for RP, I think because of the lack of RP and _ERP_ data in their training set? idk.)
Testing done.
It performs really well in complex scenarios and follows the character card quite well. The character card and previous messages can affect the style of the next reply a lot.
The main idea: instead of _merging_ models to create a new model, I try to put these best models into a Mixtral MoE so they can work together. And the result is good; every model has its own uniqueness and strength.
Downside? It only supports 8k (8192) context length...
Alpaca prompting format. |
Ppoyaa/Lumina-4 | Ppoyaa | 2024-05-02T12:43:27Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T11:46:12Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
---
# Lumina-4
Lumina-4 is a Mixture of Experts (MoE) made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
This model improves on the roleplaying and storytelling of [Lumina-3.5](https://huggingface.co/Ppoyaa/Lumina-3.5) while still retaining its strength, and shows a slight improvement on the Open LLM Leaderboard.
It uses a context window of up to 32k.
# Response Examples
``` Math
Input: 25 - 4 * 2 + 3 = ?
------------------------------------
Output: The given mathematical expression is 25 - 4 * 2 + 3. To solve it, we should follow the order of operations (PEMDAS):
1. Parentheses: There are none in this equation.
2. Exponents: None present.
3. Multiplication and Division: We perform multiplications before divisions from left to right. In this case, 4 * 2 = 8. Now our equation becomes 25 - 8 + 3.
4. Addition and Subtraction: We perform additions before subtractions from left to right. Here, we have a subtraction then an addition: 25 - 8 = 17, then 17 + 3 = 20.
So the result of the expression 25 - 4 * 2 + 3 is 20.
```
# 🏆 Open LLM Leaderboard Evaluation Results
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.59|
|AI2 Reasoning Challenge (25-Shot)|72.61|
|HellaSwag (10-Shot) |88.45|
|MMLU (5-Shot) |64.73|
|TruthfulQA (0-shot) |74.24|
|Winogrande (5-shot) |83.90|
|GSM8k (5-shot) |69.60|
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/Lumina-4"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ehristoforu/Gistral-16B | ehristoforu | 2024-04-21T20:16:42Z | 397 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gistral",
"gistral-16b",
"multilingual",
"code",
"128k",
"metamath",
"grok-1",
"anthropic",
"openhermes",
"instruct",
"merge",
"en",
"fr",
"ru",
"de",
"ja",
"ko",
"zh",
"it",
"uk",
"dataset:HuggingFaceH4/grok-conversation-harmless",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized_fixed",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"dataset:meta-math/MetaMathQA",
"dataset:emozilla/yarn-train-tokenized-16k-mistral",
"dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:teknium/openhermes",
"dataset:lksy/ru_instruct_gpt4",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"base_model:Gaivoronsky/Mistral-7B-Saiga",
"base_model:snorkelai/Snorkel-Mistral-PairRM-DPO",
"base_model:OpenBuddy/openbuddy-mistral2-7b-v20.3-32k",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:HuggingFaceH4/mistral-7b-grok",
"base_model:HuggingFaceH4/mistral-7b-anthropic",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:ajibawa-2023/Code-Mistral-7B",
"base_model:SherlockAssistant/Mistral-7B-Instruct-Ukrainian",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T12:51:47Z | ---
base_model:
- Gaivoronsky/Mistral-7B-Saiga
- snorkelai/Snorkel-Mistral-PairRM-DPO
- OpenBuddy/openbuddy-mistral2-7b-v20.3-32k
- meta-math/MetaMath-Mistral-7B
- HuggingFaceH4/mistral-7b-grok
- HuggingFaceH4/mistral-7b-anthropic
- NousResearch/Yarn-Mistral-7b-128k
- ajibawa-2023/Code-Mistral-7B
- SherlockAssistant/Mistral-7B-Instruct-Ukrainian
datasets:
- HuggingFaceH4/grok-conversation-harmless
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized_fixed
- HuggingFaceH4/cai-conversation-harmless
- meta-math/MetaMathQA
- emozilla/yarn-train-tokenized-16k-mistral
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- teknium/openhermes
- lksy/ru_instruct_gpt4
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
library_name: transformers
tags:
- mistral
- gistral
- gistral-16b
- multilingual
- code
- 128k
- metamath
- grok-1
- anthropic
- openhermes
- instruct
- merge
language:
- en
- fr
- ru
- de
- ja
- ko
- zh
- it
- uk
- multilingual
- code
pipeline_tag: text-generation
license: apache-2.0
---
# Gistral 16B (Mistral from 7B to 16B)

We created a model from other cool models to combine everything into one cool model.
**GGUF Version:** [ehristoforu/Gistral-16B-Q4_K_M-GGUF](https://huggingface.co/ehristoforu/Gistral-16B-Q4_K_M-GGUF)
## Model Details
### Model Description
- **Developed by:** [@ehristoforu](https://huggingface.co/ehristoforu)
- **Model type:** Text Generation (conversational)
- **Language(s) (NLP):** English, French, Russian, German, Japanese, Chinese, Korean, Italian, Ukrainian, Code
- **Finetuned from model:** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## How to Get Started with the Model
Use the code below to get started with the model.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "ehristoforu/Gistral-16B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## About merge
Base model: mistralai/Mistral-7B-Instruct-v0.2
Merge models:
- Gaivoronsky/Mistral-7B-Saiga
- snorkelai/Snorkel-Mistral-PairRM-DPO
- OpenBuddy/openbuddy-mistral2-7b-v20.3-32k
- meta-math/MetaMath-Mistral-7B
- HuggingFaceH4/mistral-7b-grok
- HuggingFaceH4/mistral-7b-anthropic
- NousResearch/Yarn-Mistral-7b-128k
- ajibawa-2023/Code-Mistral-7B
- SherlockAssistant/Mistral-7B-Instruct-Ukrainian
Merge datasets:
- HuggingFaceH4/grok-conversation-harmless
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized_fixed
- HuggingFaceH4/cai-conversation-harmless
- meta-math/MetaMathQA
- emozilla/yarn-train-tokenized-16k-mistral
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- teknium/openhermes
- lksy/ru_instruct_gpt4
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch |
TeeZee/GALAXY-16B-v1.0 | TeeZee | 2024-04-24T01:22:46Z | 397 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:Open-Orca/SlimOrca",
"dataset:MinervaAI/Aesir-Preview",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-22T19:45:41Z | ---
language:
- en
license: apache-2.0
tags:
- not-for-all-audiences
datasets:
- Intel/orca_dpo_pairs
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- Open-Orca/SlimOrca
- MinervaAI/Aesir-Preview
- allenai/ultrafeedback_binarized_cleaned
---
# GALAXY-16B-v1.0

## Technical notes
- 72 layers, DUS procedure: Mistral (32) -> SOLAR (48) -> GALAXY (72)
- 16B parameters
- model created as an extension of the depth upscaling (DUS) procedure used for SOLAR by Upstage
## Results
- model can and will produce NSFW content
- waiting for eval results
## Prompt template
- Alpaca
- the chat template is embedded in the tokenizer config and should load automatically (see the sketch below)
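As a minimal illustration of the embedded chat template (the user message is a placeholder), the prompt can be built like this:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TeeZee/GALAXY-16B-v1.0")
messages = [{"role": "user", "content": "Write a short story about a distant galaxy."}]
# The Alpaca-style template shipped in the tokenizer config is applied automatically.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```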
## Context size
- 4096
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
|
ChuGyouk/Llama-3-11.5B-modified-DUS-nocpt | ChuGyouk | 2024-04-23T15:59:35Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-23T15:14:24Z | ---
base_model:
- meta-llama/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 19]
model:
model:
path: meta-llama/Meta-Llama-3-8B
- sources:
- layer_range: [3, 32]
model:
model:
path: meta-llama/Meta-Llama-3-8B
``` |
timm/vit_little_patch16_reg4_gap_256.sbb_in1k | timm | 2024-05-27T16:04:41Z | 397 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2309.16588",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2024-05-10T23:55:51Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_little_patch16_reg4_gap_256.sbb_in1k
A Vision Transformer (ViT) image classification model. This is a `timm`-specific variation of the architecture with registers and global average pooling.
There are a number of models in the lower end of model scales that originate in `timm`:
| variant | width | mlp width (mult) | heads | depth | timm orig |
| ------- | ----- | ---------------- | ----- | ----- | ---- |
| tiny | 192 | 768 (4) | 3 | 12 | n |
| wee | 256 | 1280 (5) | 4 | 14 | y |
| pwee | 256 | 1280 (5) | 4 | 16 (parallel) | y |
| small | 384 | 1536 (4) | 6 | 12 | n |
| little | 320 | 1792 (5.6) | 5 | 14 | y |
| medium | 512 | 2048 (4) | 8 | 12 | y |
| mediumd | 512 | 2048 (4) | 8 | 20 | y |
| betwixt | 640 | 2560 (4) | 10 | 12 | y |
| base | 768 | 3072 (4) | 12 | 12 | n |
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Searching for better baselines. Influenced by Swin/DeiT/DeiT-III but w/ increased weight decay, moderate (in12k) to high (in1k) augmentation. Layer-decay used for fine-tune. Some runs used BCE and/or NAdamW instead of AdamW.
* See [train_hparams.yaml](./train_hparams.yaml) for specifics of each model.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.5
- GMACs: 5.7
- Activations (M): 12.4
- Image size: 256 x 256
- **Papers:**
- Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_little_patch16_reg4_gap_256.sbb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_little_patch16_reg4_gap_256.sbb_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 320, 16, 16])
# torch.Size([1, 320, 16, 16])
# torch.Size([1, 320, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_little_patch16_reg4_gap_256.sbb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 260, 320) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
| model | top1 | top5 | param_count | img_size |
| -------------------------------------------------- | ------ | ------ | ----------- | -------- |
| [vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 86.202 | 97.874 | 64.11 | 256 |
| [vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 85.418 | 97.480 | 60.4 | 256 |
| [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 84.930 | 97.386 | 38.88 | 256 |
| [vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k) | 84.322 | 96.812 | 63.95 | 256 |
| [vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k) | 83.906 | 96.684 | 60.23 | 256 |
| [vit_base_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_base_patch16_rope_reg1_gap_256.sbb_in1k) | 83.866 | 96.67 | 86.43 | 256 |
| [vit_medium_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_rope_reg1_gap_256.sbb_in1k) | 83.81 | 96.824 | 38.74 | 256 |
| [vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k) | 83.774 | 96.972 | 22.52 | 256 |
| [vit_betwixt_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in1k) | 83.706 | 96.616 | 60.4 | 256 |
| [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 83.628 | 96.544 | 60.4 | 256 |
| [vit_medium_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in1k) | 83.47 | 96.622 | 38.88 | 256 |
| [vit_medium_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg1_gap_256.sbb_in1k) | 83.462 | 96.548 | 38.88 | 256 |
| [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 82.514 | 96.262 | 22.52 | 256 |
| [vit_wee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_wee_patch16_reg1_gap_256.sbb_in1k) | 80.258 | 95.360 | 13.42 | 256 |
| [vit_pwee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_pwee_patch16_reg1_gap_256.sbb_in1k) | 80.072 | 95.136 | 15.25 | 256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{darcet2023vision,
title={Vision Transformers Need Registers},
author={Darcet, Timoth{'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv preprint arXiv:2309.16588},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
|
Kukedlc/NeuralLLaMa-3-8b-ORPO-v0.2 | Kukedlc | 2024-05-11T22:52:26Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-11T16:31:13Z | ---
library_name: transformers
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/scb10x_-_typhoon-7b-gguf | RichardErkhov | 2024-05-12T12:58:43Z | 397 | 0 | null | [
"gguf",
"arxiv:2312.13951",
"region:us"
]
| null | 2024-05-12T11:07:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
typhoon-7b - GGUF
- Model creator: https://huggingface.co/scb10x/
- Original model: https://huggingface.co/scb10x/typhoon-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [typhoon-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q2_K.gguf) | Q2_K | 2.55GB |
| [typhoon-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ3_XS.gguf) | IQ3_XS | 2.83GB |
| [typhoon-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ3_S.gguf) | IQ3_S | 2.98GB |
| [typhoon-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K_S.gguf) | Q3_K_S | 2.96GB |
| [typhoon-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ3_M.gguf) | IQ3_M | 3.07GB |
| [typhoon-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K.gguf) | Q3_K | 3.29GB |
| [typhoon-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K_M.gguf) | Q3_K_M | 3.29GB |
| [typhoon-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K_L.gguf) | Q3_K_L | 3.57GB |
| [typhoon-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ4_XS.gguf) | IQ4_XS | 3.69GB |
| [typhoon-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_0.gguf) | Q4_0 | 3.84GB |
| [typhoon-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ4_NL.gguf) | IQ4_NL | 3.89GB |
| [typhoon-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_K_S.gguf) | Q4_K_S | 3.87GB |
| [typhoon-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_K.gguf) | Q4_K | 4.09GB |
| [typhoon-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_K_M.gguf) | Q4_K_M | 4.09GB |
| [typhoon-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_1.gguf) | Q4_1 | 4.26GB |
| [typhoon-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_0.gguf) | Q5_0 | 4.67GB |
| [typhoon-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_K_S.gguf) | Q5_K_S | 4.67GB |
| [typhoon-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_K.gguf) | Q5_K | 4.8GB |
| [typhoon-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_K_M.gguf) | Q5_K_M | 4.8GB |
| [typhoon-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_1.gguf) | Q5_1 | 5.09GB |
| [typhoon-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q6_K.gguf) | Q6_K | 5.55GB |
Original model description:
---
license: apache-2.0
language:
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- pretrained
---
# Typhoon-7B: Thai Large Language Model (Pretrained)
**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, and it is based on Mistral-7B.
**Typhoon-7B** outperforms all open-source Thai language models at the time of writing as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results in instruction-following tasks. Also, its performance in Thai is on par with GPT-3.5 while being 2.62 times more efficient in tokenizing Thai text.
**This is not an instruction-tuned model** - It may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
The Instruct model (chat model) will be released soon. Registration for the beta version is open at https://opentyphoon.ai/, or follow us for future model releases at https://twitter.com/opentyphoon.
<div align="center">
<img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</div>
For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).
## Model Description
- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (Commercial)
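As a rough illustration (not an official snippet from the authors), the base repo can be loaded with `transformers` — this assumes `transformers>=4.34.0`, sufficient GPU memory, and uses placeholder prompt and generation settings:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-7b"  # base repo; the GGUF files above are for llama.cpp-style runtimes instead
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Pretrained base model: prefer few-shot prompting over bare instructions
prompt = "แปลเป็นภาษาอังกฤษ: สวัสดีครับ ->"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```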
## Performance on Thai Benchmark
| **Model** | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B | 0.379 | 0.393 | 0.700 | 0.414 | 0.324 |
| SeaLLM-7B | 0.342 | 0.256 | 0.589 | 0.336 | 0.305 |
| OpenThaiGPT-beta-7B | 0.180 | 0.278 | 0.411 | 0.319 | 0.243 |
| WangChanGLM | 0.192 | 0.271 | 0.167 | 0.172 | 0.175 |
| SEA-LION-7B | 0.179 | 0.290 | 0.244 | 0.198 | 0.175 |
| Avg. Human | 0.318 | - | 0.472 | 0.406 | - |
## Intended Uses & Limitations
This model is a pretrained base model. Thus, it may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
## Follow us
https://twitter.com/opentyphoon
## Support / Ask any question
https://discord.gg/CqyBscMFpg
## SCB10X AI Team
- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## Contact Us
- General & Collaboration: [email protected], [email protected]
- Technical: [email protected]
|
ChaoticNeutrals/Puppy_Purpose_0.69 | ChaoticNeutrals | 2024-05-14T22:13:44Z | 397 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:jeiku/OrthoPoppy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-12T23:14:20Z | ---
base_model:
- jeiku/OrthoPoppy
- ResplendentAI/NoWarning_Llama3
- jeiku/UnPoppy_8B
- ResplendentAI/Smarts_Llama3
- ResplendentAI/Aura_Uncensored_l3_8B
- ResplendentAI/Theory_of_Mind_Llama3
- jeiku/UnPoppy_8B
library_name: transformers
license: apache-2.0
---
# Puppy Purpose 0.69

Say hello to your puppy princess, she is pawsitively pleased to play with you!
A combination of model merges and LoRA merges using my signature datasets. I'm not too sure how this one will turn out; I made it for my own usage, but it should serve others well too. This model is compatible with our Chaotic Neutrals Llama3 mmproj files. Good luck and have fun! |
mradermacher/Sailor-14B-GGUF | mradermacher | 2024-05-17T00:51:19Z | 397 | 0 | transformers | [
"transformers",
"gguf",
"multilingual",
"sea",
"sailor",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:cerebras/SlimPajama-627B",
"dataset:Skywork/SkyPile-150B",
"dataset:allenai/MADLAD-400",
"dataset:cc100",
"base_model:sail/Sailor-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-17T00:00:20Z | ---
base_model: sail/Sailor-14B
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
language:
- en
- zh
- id
- th
- vi
- ms
- lo
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multilingual
- sea
- sailor
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/sail/Sailor-14B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
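As a rough sketch (not part of the original card), one way to fetch a single-file quant from this repo and run it with the `llama-cpp-python` bindings — the chosen filename, context size, and prompt are illustrative:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quants listed in the table below
gguf_path = hf_hub_download(
    repo_id="mradermacher/Sailor-14B-GGUF",
    filename="Sailor-14B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Jakarta is the capital of", max_tokens=32)
print(out["choices"][0]["text"])
```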
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q2_K.gguf) | Q2_K | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.IQ3_XS.gguf) | IQ3_XS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.IQ3_S.gguf) | IQ3_S | 6.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q3_K_S.gguf) | Q3_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.IQ3_M.gguf) | IQ3_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.IQ4_XS.gguf) | IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q4_K_M.gguf) | Q4_K_M | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q5_K_S.gguf) | Q5_K_S | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q6_K.gguf) | Q6_K | 12.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-14B-GGUF/resolve/main/Sailor-14B.Q8_0.gguf) | Q8_0 | 15.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Alsebay/L3-test | Alsebay | 2024-05-19T06:05:29Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-19T05:15:56Z | ---
license: cc-by-nc-4.0
---
Well, nothing much: a test model trained for 1 epoch on an old dataset.
Around 130 rows? Or ~150k rows? I don't remember `(*>﹏<*)′
This is my first L3 test with a bigger dataset of novels. Maybe it won't lead to a good model, I don't know, since the OpenLLM Leaderboard is frozen now.
Part 1 of a 4-model L3 series. |
mradermacher/HolyYi-9B-GGUF | mradermacher | 2024-05-19T16:57:51Z | 397 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:nbeerbower/bible-dpo",
"base_model:nbeerbower/HolyYi-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-19T15:47:08Z | ---
base_model: nbeerbower/HolyYi-9B
datasets:
- nbeerbower/bible-dpo
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/HolyYi-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HolyYi-9B-GGUF/resolve/main/HolyYi-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf | RichardErkhov | 2024-05-20T06:03:07Z | 397 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-20T01:28:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeLlama-13B-Instruct-fp16 - GGUF
- Model creator: https://huggingface.co/TheBloke/
- Original model: https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-fp16/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeLlama-13B-Instruct-fp16.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q2_K.gguf) | Q2_K | 4.52GB |
| [CodeLlama-13B-Instruct-fp16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [CodeLlama-13B-Instruct-fp16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [CodeLlama-13B-Instruct-fp16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [CodeLlama-13B-Instruct-fp16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [CodeLlama-13B-Instruct-fp16.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K.gguf) | Q3_K | 5.9GB |
| [CodeLlama-13B-Instruct-fp16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [CodeLlama-13B-Instruct-fp16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [CodeLlama-13B-Instruct-fp16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [CodeLlama-13B-Instruct-fp16.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_0.gguf) | Q4_0 | 6.86GB |
| [CodeLlama-13B-Instruct-fp16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [CodeLlama-13B-Instruct-fp16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [CodeLlama-13B-Instruct-fp16.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_K.gguf) | Q4_K | 7.33GB |
| [CodeLlama-13B-Instruct-fp16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [CodeLlama-13B-Instruct-fp16.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q4_1.gguf) | Q4_1 | 7.61GB |
| [CodeLlama-13B-Instruct-fp16.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_0.gguf) | Q5_0 | 8.36GB |
| [CodeLlama-13B-Instruct-fp16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [CodeLlama-13B-Instruct-fp16.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_K.gguf) | Q5_K | 8.6GB |
| [CodeLlama-13B-Instruct-fp16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [CodeLlama-13B-Instruct-fp16.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q5_1.gguf) | Q5_1 | 9.1GB |
| [CodeLlama-13B-Instruct-fp16.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q6_K.gguf) | Q6_K | 9.95GB |
| [CodeLlama-13B-Instruct-fp16.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-13B-Instruct-fp16-gguf/blob/main/CodeLlama-13B-Instruct-fp16.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
license: llama2
tags:
- llama-2
- codellama
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CodeLlama 13B-Instruct fp16
- Model creator: [Meta](https://ai.meta.com/llama/)
## Description
This is Transformers/HF format fp16 weights for CodeLlama 13B-Instruct. It is the result of downloading CodeLlama 13B-Instruct from [Meta](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and converting to HF using `convert_llama_weights_to_hf.py`.
Quantisations will be coming shortly.
Please note that due to a change in the RoPE Theta value, for correct results you must load these FP16 models with `trust_remote_code=True`
Credit to @emozilla for creating the necessary modelling code to achieve this!
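For illustration only (not from the original card), loading with that flag might look like the following; the dtype and device settings are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/CodeLlama-13B-Instruct-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # required here: the custom modelling code handles the changed RoPE theta
    torch_dtype=torch.float16,
    device_map="auto",
)
```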
## Prompt template: TBC
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card
# Code Llama
## **Model Details**
**Model Developers** Meta AI
**Variations** Code Llama comes in three model sizes, and three variants:
1) Code Llama: our base models designed for general code synthesis and understanding
2) Code Llama - Python: designed specifically for Python
3) Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**Input** Models input text only.
**Output** Models output text only.
**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).
## **Intended Use**
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## **Hardware and Software**
**Training Factors**
We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
**Training data**
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
Code Llama - Instruct uses additional instruction fine-tuning data.
**Evaluation Results**
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## **Ethical Considerations and Limitations**
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf | RichardErkhov | 2024-05-20T03:24:51Z | 397 | 0 | null | [
"gguf",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"region:us"
]
| null | 2024-05-20T02:05:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
WizardLM-2-7B - GGUF
- Model creator: https://huggingface.co/dreamgen/
- Original model: https://huggingface.co/dreamgen/WizardLM-2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [WizardLM-2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [WizardLM-2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [WizardLM-2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [WizardLM-2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [WizardLM-2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [WizardLM-2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [WizardLM-2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [WizardLM-2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [WizardLM-2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [WizardLM-2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [WizardLM-2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [WizardLM-2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [WizardLM-2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [WizardLM-2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [WizardLM-2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [WizardLM-2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [WizardLM-2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [WizardLM-2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [WizardLM-2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [WizardLM-2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [WizardLM-2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [WizardLM-2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
---
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models that are 10x larger.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions, covering the main categories of human requirements, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
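A rough sketch (not from the original card) of driving one of the quantized files above with this prompt format via the `llama-cpp-python` bindings; the filename, context size, and sampling values are illustrative assumptions:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="WizardLM-2-7B.Q4_K_M.gguf", n_ctx=4096)

# Vicuna-style prompt as described above
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: Who are you? ASSISTANT:"
out = llm(prompt, max_tokens=128, stop=["USER:", "</s>"])
print(out["choices"][0]["text"].strip())
```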
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
|
cloudyu/Llama-3-70Bx2-MOE | cloudyu | 2024-05-20T07:11:25Z | 397 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-20T06:29:47Z | ---
license: llama3
---
This is my first MoE model based on Llama-3 70B.
Metrics are NOT ready yet.
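A minimal loading sketch (an illustration only; a 2x70B MoE needs a very large amount of GPU memory or aggressive offloading, and the generation settings are placeholders):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/Llama-3-70Bx2-MOE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spreads the merged experts across available GPUs / CPU RAM
)

prompt = "write me a story about peppa pig."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```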
Example output:
```
<|begin_of_text|>write me a story about peppa pig. Peppa Pig was a very busy pig. She had a lot of things to do today. She needed to get ready for a very important event, the annual mud festival. This year, she was going to be the main judge of the muddy puddle contest. She was very excited and couldn't wait to see all the muddy puddles that everyone would bring. But first, she needed to get ready. She took a bath, put on her favorite dress, and did her hair. When she was all ready, she headed downstairs to the kitchen, where her mom was making breakfast. She had made Peppa's favorite, pancakes with strawberries and whipped cream. Peppa was so happy and sat down to eat her breakfast. After she finished eating, she headed out the door to go to the festival. When she arrived, she saw all of her friends waiting for her. There was Suzy Sheep, Danny Dog, and Emily Elephant. They all said hello and Peppa was so happy to see them. The festival was about to start and Peppa took her place at the judges table. The first contestant was Suzy Sheep and she had brought a puddle that was very muddy and squelchy. Peppa loved it and gave it a score of 9 out of 10. Next was Danny Dog and he had brought a puddle that was even bigger and muddier than Suzy's. Peppa was very impressed and gave it a score of 9.5 out of 10. Then it was Emily Elephant's turn and she had brought a puddle that was so muddy, it looked like a swamp. Peppa was amazed and gave it a score of 10 out of 10. After all the contestants had presented their puddles, Peppa had to make a very difficult decision. She had to choose the winner of the muddy puddle contest. She thought for a long time and finally, she made her decision. The winner was Emily Elephant. Everyone cheered and clapped for Emily and Peppa gave her a big hug and a trophy. The festival was a huge success and Peppa was so happy to have been a part of it. She couldn't wait to do it all again next year. The end.<|eot_id|> [end of text]
``` |
ukung/Mistral-INDO-7B-GGUF | ukung | 2024-05-25T11:39:48Z | 397 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-25T10:23:37Z | Entry not found |
mradermacher/llama-3-8B-rpg-v4-GGUF | mradermacher | 2024-05-27T01:13:59Z | 397 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:osouza/llama-3-8B-rpg-v4",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T00:14:59Z | ---
base_model: osouza/llama-3-8B-rpg-v4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osouza/llama-3-8B-rpg-v4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8B-rpg-v4-GGUF/resolve/main/llama-3-8B-rpg-v4.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/nisten_-_shqiponja-59b-v1-gguf | RichardErkhov | 2024-06-15T21:17:13Z | 397 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-15T08:02:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
shqiponja-59b-v1 - GGUF
- Model creator: https://huggingface.co/nisten/
- Original model: https://huggingface.co/nisten/shqiponja-59b-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [shqiponja-59b-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q2_K.gguf) | Q2_K | 20.34GB |
| [shqiponja-59b-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.IQ3_XS.gguf) | IQ3_XS | 22.59GB |
| [shqiponja-59b-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.IQ3_S.gguf) | IQ3_S | 23.85GB |
| [shqiponja-59b-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q3_K_S.gguf) | Q3_K_S | 23.76GB |
| [shqiponja-59b-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.IQ3_M.gguf) | IQ3_M | 24.75GB |
| [shqiponja-59b-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q3_K.gguf) | Q3_K | 26.51GB |
| [shqiponja-59b-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q3_K_M.gguf) | Q3_K_M | 26.51GB |
| [shqiponja-59b-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q3_K_L.gguf) | Q3_K_L | 28.89GB |
| [shqiponja-59b-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.IQ4_XS.gguf) | IQ4_XS | 29.68GB |
| [shqiponja-59b-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q4_0.gguf) | Q4_0 | 30.99GB |
| [shqiponja-59b-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.IQ4_NL.gguf) | IQ4_NL | 31.3GB |
| [shqiponja-59b-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q4_K_S.gguf) | Q4_K_S | 31.22GB |
| [shqiponja-59b-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q4_K.gguf) | Q4_K | 32.91GB |
| [shqiponja-59b-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q4_K_M.gguf) | Q4_K_M | 32.91GB |
| [shqiponja-59b-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/blob/main/shqiponja-59b-v1.Q4_1.gguf) | Q4_1 | 34.39GB |
| [shqiponja-59b-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q5_0 | 37.8GB |
| [shqiponja-59b-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q5_K_S | 37.8GB |
| [shqiponja-59b-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q5_K | 38.79GB |
| [shqiponja-59b-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q5_K_M | 38.79GB |
| [shqiponja-59b-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q5_1 | 41.2GB |
| [shqiponja-59b-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q6_K | 45.03GB |
| [shqiponja-59b-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/nisten_-_shqiponja-59b-v1-gguf/tree/main/) | Q8_0 | 58.32GB |
Original model description:
---
base_model:
- jondurbin/nontoxic-bagel-34b-v0.2
tags:
- mergekit
- frankenstein
- merge
license: mit
---
# Shqiponja-59 V1

This is an untrained experimental 59B merged model.
These two models were picked specifically to complement each other's strengths.
### Models Merged
* NousResearch/Nous-Hermes-2-Yi-34B
* jondurbin/nontoxic-bagel-34b-v0.2
Merged using the Undi95 style passthrough merge method.
### The secret sauce
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 52]
model: /home/admin/nv1/nontoxic-bagel-34b-v0.2
- sources:
- layer_range: [8, 60]
model: /home/admin/nv1/Nous-Hermes-2-Yi-34B
```
# License MIT - Enjoy
|
siacus/llama-2-7b-cap-Q4_K_M.gguf | siacus | 2024-06-16T03:24:06Z | 397 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-16T02:41:21Z | Entry not found |
igans/Mistral-7B-v0.3-Q4_K_M-GGUF | igans | 2024-06-24T16:47:47Z | 397 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-24T16:47:29Z | ---
base_model: mistralai/Mistral-7B-v0.3
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# igans/Mistral-7B-v0.3-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo igans/Mistral-7B-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo igans/Mistral-7B-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-v0.3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo igans/Mistral-7B-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo igans/Mistral-7B-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-v0.3-q4_k_m.gguf -c 2048
```
|
RichardErkhov/erfanzar_-_LLamaStory-70M-gguf | RichardErkhov | 2024-06-29T15:59:46Z | 397 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-29T15:51:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLamaStory-70M - GGUF
- Model creator: https://huggingface.co/erfanzar/
- Original model: https://huggingface.co/erfanzar/LLamaStory-70M/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LLamaStory-70M.Q2_K.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q2_K.gguf) | Q2_K | 0.03GB |
| [LLamaStory-70M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.IQ3_XS.gguf) | IQ3_XS | 0.03GB |
| [LLamaStory-70M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.IQ3_S.gguf) | IQ3_S | 0.03GB |
| [LLamaStory-70M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q3_K_S.gguf) | Q3_K_S | 0.03GB |
| [LLamaStory-70M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.IQ3_M.gguf) | IQ3_M | 0.04GB |
| [LLamaStory-70M.Q3_K.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q3_K.gguf) | Q3_K | 0.04GB |
| [LLamaStory-70M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q3_K_M.gguf) | Q3_K_M | 0.04GB |
| [LLamaStory-70M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q3_K_L.gguf) | Q3_K_L | 0.04GB |
| [LLamaStory-70M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.IQ4_XS.gguf) | IQ4_XS | 0.04GB |
| [LLamaStory-70M.Q4_0.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q4_0.gguf) | Q4_0 | 0.04GB |
| [LLamaStory-70M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.IQ4_NL.gguf) | IQ4_NL | 0.04GB |
| [LLamaStory-70M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q4_K_S.gguf) | Q4_K_S | 0.04GB |
| [LLamaStory-70M.Q4_K.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q4_K.gguf) | Q4_K | 0.04GB |
| [LLamaStory-70M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q4_K_M.gguf) | Q4_K_M | 0.04GB |
| [LLamaStory-70M.Q4_1.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q4_1.gguf) | Q4_1 | 0.04GB |
| [LLamaStory-70M.Q5_0.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q5_0.gguf) | Q5_0 | 0.05GB |
| [LLamaStory-70M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [LLamaStory-70M.Q5_K.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q5_K.gguf) | Q5_K | 0.05GB |
| [LLamaStory-70M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [LLamaStory-70M.Q5_1.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q5_1.gguf) | Q5_1 | 0.05GB |
| [LLamaStory-70M.Q6_K.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q6_K.gguf) | Q6_K | 0.05GB |
| [LLamaStory-70M.Q8_0.gguf](https://huggingface.co/RichardErkhov/erfanzar_-_LLamaStory-70M-gguf/blob/main/LLamaStory-70M.Q8_0.gguf) | Q8_0 | 0.07GB |
Original model description:
---
license: mit
datasets:
- qwedsacf/story-generation
language:
- en
---
*LLamaStory-70M* is a LLaMA model pretrained on a story-generation dataset.
About Training:
- EasyDel Platform Used
- TPU-v4
- batch-size 2048
- max positioning embedding 512
- 12 epochs (so far)
This model will be used to debug 4-bit and 8-bit training and inference in JAX and Rust with EasyDel.
|
faizalnf1800/MeinaMix-V10-FP16-SD1.5 | faizalnf1800 | 2024-06-30T05:37:06Z | 397 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-30T05:33:56Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
google/pegasus-pubmed | google | 2023-01-24T16:42:41Z | 396 | 7 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a PEGASUS model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap-sentence ratio between 15% and 45%.
- important sentences are sampled with 20% uniform noise added to their importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings also changed, please refer to the changes in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
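A minimal summarization sketch with the `transformers` Pegasus classes (the input text is a placeholder and generation settings are left at their defaults):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = "google/pegasus-pubmed"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
# placeholder input; replace with the biomedical article to summarize
text = "Background: ..."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```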
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
timm/resnetaa50d.sw_in12k_ft_in1k | timm | 2024-02-10T23:40:32Z | 396 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1904.11486",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-05T18:42:46Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnetaa50d.sw_in12k_ft_in1k
A ResNet-D (Rectangle-2 Anti-Aliasing) image classification model.
This model features:
* ReLU activations
* 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman in `timm` using the recipe template described below.
Recipe details:
* Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes)
* AdamW optimizer, gradient clipping, EMA weight averaging
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.6
- GMACs: 5.4
- Activations (M): 12.4
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Making Convolutional Networks Shift-Invariant Again: https://arxiv.org/abs/1904.11486
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetaa50d.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetaa50d.sw_in12k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetaa50d.sw_in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{zhang2019shiftinvar,
title={Making Convolutional Networks Shift-Invariant Again},
author={Zhang, Richard},
booktitle={ICML},
year={2019}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
|
timm/crossvit_small_240.in1k | timm | 2023-04-24T00:35:53Z | 396 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.14899",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-24T00:35:24Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for crossvit_small_240.in1k
A CrossViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 26.9
- GMACs: 5.6
- Activations (M): 18.2
- Image size: 240 x 240
- **Papers:**
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification: https://arxiv.org/abs/2103.14899
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/IBM/CrossViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('crossvit_small_240.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'crossvit_small_240.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (torch.Size([1, 401, 192]), torch.Size([1, 197, 384])) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{
chen2021crossvit,
title={{CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification}},
author={Chun-Fu (Richard) Chen and Quanfu Fan and Rameswar Panda},
booktitle={International Conference on Computer Vision (ICCV)},
year={2021}
}
```
|
Salesforce/instructcodet5p-16b | Salesforce | 2023-08-03T09:44:37Z | 396 | 57 | transformers | [
"transformers",
"pytorch",
"codet5p",
"text2text-generation",
"custom_code",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
]
| text2text-generation | 2023-05-16T13:28:22Z | ---
license: bsd-3-clause
---
# InstructCodeT5+ 16B
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `AutoModelForSeq2SeqLM` functionality and employs the same tokenizer as [CodeGen](https://github.com/salesforce/CodeGen).
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "Salesforce/instructcodet5p-16b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
trust_remote_code=True).to(device)
encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
encoding['decoder_input_ids'] = encoding['input_ids'].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
This checkpoint is initialized from off-the-shelf LLMs, i.e. its encoder is initialized from [CodeGen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) and its decoder is initialized from [CodeGen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono).
It is trained on unimodal code data during the first-stage pretraining, which includes a diverse set of pretraining tasks such as _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation.
Finally, we apply instruction tuning to align it with natural language instructions following [Code Alpaca](https://github.com/sahil280114/codealpaca).
Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
lorahub/flan_t5_large-amazon_polarity_User_recommend_this_product | lorahub | 2023-07-24T09:45:39Z | 396 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T09:45:28Z | ---
library_name: peft
---
|
timm/fastvit_sa36.apple_dist_in1k | timm | 2023-08-23T21:05:40Z | 396 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.14189",
"license:other",
"region:us"
]
| image-classification | 2023-08-23T21:05:15Z | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for fastvit_sa36.apple_dist_in1k
A FastViT image classification model. Trained on ImageNet-1k with distillation by paper authors.
Please observe [original license](https://github.com/apple/ml-fastvit/blob/8af5928238cab99c45f64fc3e4e7b1516b8224ba/LICENSE).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 31.5
- GMACs: 5.6
- Activations (M): 34.0
- Image size: 256 x 256
- **Papers:**
- FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization: https://arxiv.org/abs/2303.14189
- **Original:** https://github.com/apple/ml-fastvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fastvit_sa36.apple_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_sa36.apple_dist_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 64, 64])
# torch.Size([1, 128, 32, 32])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 512, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fastvit_sa36.apple_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{vasufastvit2023,
author = {Pavan Kumar Anasosalu Vasu and James Gabriel and Jeff Zhu and Oncel Tuzel and Anurag Ranjan},
title = {FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year = {2023}
}
```
|
ionet-official/bc8-alpha | ionet-official | 2023-11-03T13:32:04Z | 396 | 2 | diffusers | [
"diffusers",
"clip",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-25T22:27:04Z | ---
license: apache-2.0
---
|
Pristinenlp/alime-reranker-large-zh | Pristinenlp | 2023-12-01T11:32:41Z | 396 | 8 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"mteb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-12-01T08:25:43Z | ---
tags:
- mteb
model-index:
- name: alime-reranker-large-zh
results:
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 82.32176162633382
- type: mrr
value: 84.91440476190478
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 84.08586457179406
- type: mrr
value: 86.9011507936508
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 35.497382125464284
- type: mrr
value: 35.29206349206349
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 68.25849742148222
- type: mrr
value: 78.64202157956387
---
# alime-reranker-large-zh
The alime reranker model.
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
pairs = [["西湖在哪?", "西湖风景名胜区位于浙江省杭州市"],["今天天气不错","你吓死我了"]]
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained("Pristinenlp/alime-reranker-large-zh")
model = AutoModelForSequenceClassification.from_pretrained("Pristinenlp/alime-reranker-large-zh").to(device)
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512).to(device)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores.tolist())
``` |
Imran1/MedChat3.5 | Imran1 | 2024-02-03T05:39:25Z | 396 | 3 | transformers, Unsloth, Peft, trl, accelerate, bitsandbytes | [
"transformers, Unsloth, Peft, trl, accelerate, bitsandbytes",
"safetensors",
"mistral",
"medical",
"language model",
"NLP",
"license:mit",
"region:us"
]
| null | 2024-01-17T05:55:41Z | ---
library_name: transformers, Unsloth, Peft, trl, accelerate, bitsandbytes
tags:
- medical
- language model
- NLP
license: mit
---
# Model Card for MedChat3.5
## Model Details
### Model Description
MedChat3.5 is a specialized language model based on the OpenChat 3.5 architecture, fine-tuned for biomedical natural language processing (NLP) tasks. The model has been tailored using the Llama2-MedTuned-Instructions dataset, which includes approximately 200,000 samples specifically designed for instruction-based learning in biomedical contexts. The model excels in tasks such as Named Entity Recognition (NER), Relation Extraction (RE), Medical Natural Language Inference (NLI), Document Classification, and Question Answering (QA).
- **Developed by:** Imran Ullah
- **Model type:** Language Model (LM), fine-tuned for medical NLP
- **Language(s) (NLP):** English (Biomedical Text)
- **License:** [MIT]
- **Finetuned from model [optional]:** OpenChat 3.5
## Dataset Information
### Dataset Name: Llama2-MedTuned-Instructions
#### Dataset Description
Llama2-MedTuned-Instructions is an instruction-based dataset developed for training language models in biomedical NLP tasks. Comprising approximately 200,000 samples, the dataset guides models through tasks like Named Entity Recognition (NER), Relation Extraction (RE), Medical Natural Language Inference (NLI), Document Classification, and Question Answering (QA). It consolidates subsets from well-known biomedical datasets, ensuring a diverse and comprehensive training experience.
#### Source Datasets and Composition
- Named Entity Recognition (NER): NCBI-disease, BC5CDR-disease, BC5CDR-chem, BC2GM, JNLPBA, i2b2-2012
- Relation Extraction (RE): i2b2-2010, GAD
- Natural Language Inference (NLI): MedNLI
- Document Classification: Hallmarks of cancer (HoC)
- Question Answering (QA): ChatDoctor, PMC-Llama-Instructions
#### Prompting Strategy
Each sample in the dataset follows a three-part structure: Instruction, Input, and Output, facilitating instruction-based learning.
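As a purely illustrative sketch, such a sample could be flattened into a single training prompt; the exact template below is an assumption rather than the one used to train MedChat3.5:
```python
# Hypothetical sample following the Instruction/Input/Output structure described above.
# The surrounding "### ..." template is an assumption for illustration only.
sample = {
    "Instruction": "Identify all disease mentions in the given sentence.",
    "Input": "The patient was diagnosed with type 2 diabetes and hypertension.",
    "Output": "type 2 diabetes, hypertension",
}

def build_prompt(example: dict) -> str:
    # Instruction and Input form the model's context; Output is the target completion.
    return (
        f"### Instruction:\n{example['Instruction']}\n\n"
        f"### Input:\n{example['Input']}\n\n"
        f"### Output:\n{example['Output']}"
    )

print(build_prompt(sample))
```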
#### Usage and Application
Ideal for training and evaluating models on biomedical NLP tasks, MedChat3.5 serves as a benchmark for assessing model performance in domain-specific tasks, comparing against established models like BioBERT and BioClinicalBERT.
## Inference Instructions
To use MedChat3.5 for inference, follow the provided code snippet using the `transformers` library. Make sure to install the necessary packages and authenticate using a Hugging Face API token. Adjust parameters like temperature, top-p, and top-k for the desired generation behavior. The model is optimized for tasks such as question answering and generating responses in biomedical contexts.
```python
# Example Inference Code
!pip install -q --upgrade git+https://github.com/huggingface/transformers.git
!pip install -q accelerate datasets bitsandbytes peft
# use your own Hugging Face secret token
from google.colab import userdata
hf_token = userdata.get('HF_TOKEN')
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
path = "Imran1/MedChat3.5"
# Load base LLM model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
path,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
load_in_4bit=True,
token=hf_token,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(path, token=hf_token)
tokenizer.eos_token_id = model.config.eos_token_id
tokenizer.pad_token = tokenizer.eos_token
streamer = TextStreamer(tokenizer)
tx = '''
GPT4 Correct Assistant: you are a stomach specialist.<|end_of_turn|>
GPT4 Correct User: What role does gastric acid play in the process of digestion, and how does the stomach regulate its secretion to maintain a healthy digestive environment?<|end_of_turn|>
GPT4 Correct Assistant:
'''
import warnings
warnings.filterwarnings('ignore') # Ignore all warnings
inputs = tokenizer(tx, return_tensors="pt", return_attention_mask=False).to('cuda')
generation_params = {
'max_new_tokens': 500,
'use_cache': True,
'do_sample': True,
'temperature': 0.7,
'top_p': 0.9,
'top_k': 50
}
outputs = model.generate(**inputs, **generation_params, streamer=streamer)
decoded_outputs = tokenizer.batch_decode(outputs)
# output
'''
<s>
GPT4 Correct Assistant: you are stomach specialist.<|end_of_turn|>
GPT4 Correct User: What role does gastric acid play in the process of digestion, and how does the stomach regulate its secretion to maintain a healthy digestive environment?<|end_of_turn|>
GPT4 Correct Assistant:
Gastric acid plays a crucial role in the process of digestion by breaking down food into its basic components. It is secreted by the cells lining the stomach, known as parietal cells, in response to the presence of food in the stomach.
The stomach regulates the secretion of gastric acid through a series of mechanisms that maintain a healthy digestive environment. The primary mechanism is the release of gastrin, a hormone produced by the stomach's G-cells in response to the presence of food. Gastrin stimulates the parietal cells to secrete gastric acid, which in turn aids in the breakdown of food.
The stomach also regulates the secretion of gastric acid through the release of histamine, which is produced by the ECL cells in response to the presence of food. Histamine acts on the parietal cells to stimulate gastric acid secretion.
Another mechanism involves the production of intrinsic factor, a protein produced by the stomach's mucous cells. Intrinsic factor is essential for the absorption of vitamin B12 in the small intestine. The production of intrinsic factor is regulated by gastric acid, which helps maintain a healthy balance of this essential nutrient.
Additionally, the stomach regulates the secretion of gastric acid through the release of somatostatin, a hormone produced by the D-cells of the stomach. Somatostatin inhibits gastric acid secretion, helping to maintain a healthy balance between acid production and neutralization.
In summary, the stomach regulates the secretion of gastric acid through a series of mechanisms that maintain a healthy digestive environment. These mechanisms include the release of gastrin, histamine, and intrinsic factor, as well as the release of somatostatin. By maintaining a balance between acid production and neutralization, the stomach ensures that the digestive environment remains conducive to proper digestion and absorption of nutrients.<|end_of_turn|>
'''
``` |
Artefact2/KuroMitsu-11B-GGUF | Artefact2 | 2024-01-27T12:30:01Z | 396 | 0 | null | [
"gguf",
"en",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-01-27T08:16:45Z | ---
license: cc-by-nc-4.0
language:
- en
---
These are GGUF quantized versions of [Himitsui/KuroMitsu-11B](https://huggingface.co/Himitsui/KuroMitsu-11B).
The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. |
sai17/cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-v2_more_Data | sai17 | 2024-02-20T07:19:25Z | 396 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-19T09:28:24Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-v2_more_Data
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5927874941959449
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-v2_more_Data
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0009
- Accuracy: 0.5928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.559 | 1.0 | 1362 | 1.3402 | 0.4189 |
| 1.5165 | 2.0 | 2725 | 1.2308 | 0.4647 |
| 1.484 | 3.0 | 4087 | 1.1676 | 0.4954 |
| 1.5037 | 4.0 | 5450 | 1.1206 | 0.5198 |
| 1.4489 | 5.0 | 6812 | 1.1162 | 0.5284 |
| 1.4335 | 6.0 | 8175 | 1.1395 | 0.5047 |
| 1.4281 | 7.0 | 9537 | 1.0606 | 0.5445 |
| 1.4219 | 8.0 | 10900 | 1.0754 | 0.5408 |
| 1.3935 | 9.0 | 12262 | 1.0285 | 0.5604 |
| 1.3542 | 10.0 | 13625 | 1.0497 | 0.5453 |
| 1.3761 | 11.0 | 14987 | 1.0535 | 0.5450 |
| 1.3824 | 12.0 | 16350 | 1.0268 | 0.5591 |
| 1.3709 | 13.0 | 17712 | 1.0015 | 0.5690 |
| 1.3361 | 14.0 | 19075 | 1.0266 | 0.5595 |
| 1.3673 | 15.0 | 20437 | 0.9988 | 0.5772 |
| 1.376 | 16.0 | 21800 | 0.9950 | 0.5744 |
| 1.3486 | 17.0 | 23162 | 0.9837 | 0.5784 |
| 1.3333 | 18.0 | 24525 | 0.9771 | 0.5827 |
| 1.347 | 19.0 | 25887 | 0.9895 | 0.5770 |
| 1.3381 | 20.0 | 27250 | 0.9709 | 0.5820 |
| 1.3385 | 21.0 | 28612 | 0.9704 | 0.5833 |
| 1.336 | 22.0 | 29975 | 0.9646 | 0.5885 |
| 1.3372 | 23.0 | 31337 | 0.9653 | 0.5879 |
| 1.2979 | 24.0 | 32700 | 0.9867 | 0.5814 |
| 1.2948 | 25.0 | 34062 | 0.9633 | 0.5870 |
| 1.2767 | 26.0 | 35425 | 0.9578 | 0.5877 |
| 1.3012 | 27.0 | 36787 | 0.9709 | 0.5867 |
| 1.2667 | 28.0 | 38150 | 0.9648 | 0.5899 |
| 1.3 | 29.0 | 39512 | 0.9560 | 0.5930 |
| 1.2735 | 30.0 | 40875 | 0.9595 | 0.5949 |
| 1.2895 | 31.0 | 42237 | 0.9851 | 0.5809 |
| 1.2234 | 32.0 | 43600 | 0.9601 | 0.5931 |
| 1.2212 | 33.0 | 44962 | 0.9800 | 0.5917 |
| 1.2483 | 34.0 | 46325 | 0.9662 | 0.5982 |
| 1.2507 | 35.0 | 47687 | 0.9657 | 0.5910 |
| 1.2539 | 36.0 | 49050 | 0.9954 | 0.5783 |
| 1.2491 | 37.0 | 50412 | 0.9718 | 0.5924 |
| 1.2397 | 38.0 | 51775 | 0.9769 | 0.5930 |
| 1.1903 | 39.0 | 53137 | 0.9717 | 0.5945 |
| 1.2475 | 40.0 | 54500 | 0.9995 | 0.5855 |
| 1.2371 | 41.0 | 55862 | 0.9861 | 0.5935 |
| 1.2561 | 42.0 | 57225 | 0.9856 | 0.5958 |
| 1.2069 | 43.0 | 58587 | 0.9913 | 0.5892 |
| 1.2188 | 44.0 | 59950 | 0.9902 | 0.5950 |
| 1.1732 | 45.0 | 61312 | 0.9892 | 0.5949 |
| 1.1705 | 46.0 | 62675 | 0.9991 | 0.5914 |
| 1.18 | 47.0 | 64037 | 0.9952 | 0.5925 |
| 1.2353 | 48.0 | 65400 | 0.9999 | 0.5933 |
| 1.2057 | 49.0 | 66762 | 1.0001 | 0.5920 |
| 1.1833 | 49.98 | 68100 | 1.0009 | 0.5928 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
KatyTheCutie/LemonadeRP-Testing | KatyTheCutie | 2024-03-09T05:58:29Z | 396 | 4 | null | [
"gguf",
"region:us"
]
| null | 2024-02-22T19:29:00Z | ```
Enter RP mode. You shall reply to {{user}} while staying in character. Your responses must be detailed, creative, immersive, and drive the scenario forward, write one short paragraph. You will follow {{char}}'s persona.
Be descriptive and immersive, providing vivid details about {{char}}'s actions, emotions, and the environment. Write with a high degree of complexity and burstiness. Do not repeat this message.
``` |
mradermacher/maid-yuzu-v8-alter-i1-GGUF | mradermacher | 2024-05-06T06:14:18Z | 396 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:rhplus0831/maid-yuzu-v8-alter",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-13T17:49:41Z | ---
base_model: rhplus0831/maid-yuzu-v8-alter
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
weighted/imatrix quants of https://huggingface.co/rhplus0831/maid-yuzu-v8-alter
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/maid-yuzu-v8-alter-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
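As a rough sketch, one of the files below can be loaded locally with `llama-cpp-python`; the file name and settings here are only examples:
```python
# Sketch: run one of the provided GGUF quants with llama-cpp-python.
# The file name is one entry from the table below; context size and GPU offload are examples.
from llama_cpp import Llama

llm = Llama(
    model_path="maid-yuzu-v8-alter.i1-Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)
out = llm("Q: What is a Mixture of Experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```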
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/maid-yuzu-v8-alter-i1-GGUF/resolve/main/maid-yuzu-v8-alter.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adowu/astral-demo-4 | adowu | 2024-04-10T05:09:12Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"demo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-10T02:31:25Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- astral
- demo
- mistral
---
### astral-demo-4
## Overview
astral-demo-4 is a streamlined language model designed for quick demonstrations and insights into NLP capabilities, focusing on text generation and analysis.
## Features
- Efficient Text Generation: Quickly produces text for a variety of applications.
- Compact and Fast: Optimized for speed, making it ideal for demos and prototyping.
- Prototype Development: Tests ideas in conversational AI and content generation.
## Performance
Balances performance with accuracy, providing a practical demonstration of NLP technology in action.
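A minimal, untested usage sketch with 🤗 Transformers (the generation settings below are arbitrary examples):
```python
# Sketch: basic text generation with astral-demo-4 (settings are illustrative only).
from transformers import pipeline

generator = pipeline("text-generation", model="adowu/astral-demo-4")
result = generator(
    "Write a one-sentence summary of what a language model does.",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```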
- **Developed by:** aww
- **Model type:** Mistral |
saucam/Pyrhea-72B | saucam | 2024-04-13T19:04:21Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"davidkim205/Rhea-72b-v0.5",
"abacusai/Smaug-72B-v0.1",
"base_model:davidkim205/Rhea-72b-v0.5",
"base_model:abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-12T19:00:13Z | ---
tags:
- merge
- mergekit
- davidkim205/Rhea-72b-v0.5
- abacusai/Smaug-72B-v0.1
base_model:
- davidkim205/Rhea-72b-v0.5
- abacusai/Smaug-72B-v0.1
license: apache-2.0
---

# Pyrhea-72B
Pyrhea-72B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
```yaml
name: Pyrhea-72B
models:
- model: davidkim205/Rhea-72b-v0.5
parameters:
density: 0.5
weight: 0.6
# No parameters necessary for base model
- model: abacusai/Smaug-72B-v0.1
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: davidkim205/Rhea-72b-v0.5
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Pyrhea-72B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
allknowingroger/Rogermerge2-12B-MoE | allknowingroger | 2024-04-13T08:03:25Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiMerge-7B-slerp",
"allknowingroger/RogerMerge-7B-slerp",
"base_model:allknowingroger/MultiMerge-7B-slerp",
"base_model:allknowingroger/RogerMerge-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-13T07:56:14Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiMerge-7B-slerp
- allknowingroger/RogerMerge-7B-slerp
base_model:
- allknowingroger/MultiMerge-7B-slerp
- allknowingroger/RogerMerge-7B-slerp
---
# Rogermerge2-12B-MoE
Rogermerge2-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiMerge-7B-slerp](https://huggingface.co/allknowingroger/MultiMerge-7B-slerp)
* [allknowingroger/RogerMerge-7B-slerp](https://huggingface.co/allknowingroger/RogerMerge-7B-slerp)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiMerge-7B-slerp
experts:
- source_model: allknowingroger/MultiMerge-7B-slerp
positive_prompts: ["what"]
- source_model: allknowingroger/RogerMerge-7B-slerp
positive_prompts: ["why"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Rogermerge2-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
tdrussell/Mixtral-8x22B-Capyboros-v1 | tdrussell | 2024-04-15T02:45:20Z | 396 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"dataset:ssmi153/Capybara-ShareGPT",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-14T23:34:41Z | ---
license: apache-2.0
datasets:
- ssmi153/Capybara-ShareGPT
- jondurbin/airoboros-3.2
---
QLoRA fine-tune of [Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) on a combination of the Capybara and Airoboros datasets.
Uses Mistral instruct formatting, like this:
[INST] Describe quantum computing to a layperson. [/INST]
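For example, a prompt can be assembled programmatically; this small sketch simply reproduces the wrapping shown above:
```python
# Sketch: wrap a user message in the Mistral instruct format shown above.
def format_mistral_instruct(user_message: str) -> str:
    # [INST] ... [/INST] delimits the user turn; the model's reply follows the closing tag.
    return f"[INST] {user_message} [/INST]"

prompt = format_mistral_instruct("Describe quantum computing to a layperson.")
print(prompt)  # [INST] Describe quantum computing to a layperson. [/INST]
```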
Model details:
- Trained with QLoRA, on 4 4090s, using my own [qlora-pipe](https://github.com/tdrussell/qlora-pipe) training script
- LoRA rank 64
- 4096 sequence length
- 2 epochs
You can find the LoRA adapter files [here](https://huggingface.co/tdrussell/Mixtral-8x22B-Capyboros-v1-lora). I have also uploaded a single quant (GGUF q4_k_s) [here](https://huggingface.co/tdrussell/Mixtral-8x22B-Capyboros-v1-GGUF-q4_k_s) if you want to try it without quantizing yourself or waiting for someone else to make all the quants. It fits with at least 16k context length on 96GB VRAM. |
Vargol/PixArt-Sigma_16bit | Vargol | 2024-05-02T11:31:27Z | 396 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:openrail++",
"diffusers:PixArtSigmaPipeline",
"region:us"
]
| text-to-image | 2024-04-15T08:52:27Z | ---
license: openrail++
---
This repo contains a diffusers-format version of the PixArt-Sigma repos:
PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers
PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
with the models loaded and saved in fp16 and bf16 formats, roughly halving their sizes.
It can be used where download bandwidth, memory, or disk space is relatively limited, for example on a T4 Colab instance.
To use it in a diffusers script you currently (15/04/2024) need a source distribution of diffusers
and an extra 'patch' from the PixArt-Alpha team's Sigma GitHub repo.
A simple Colab notebook can be found at https://github.com/Vargol/StableDiffusionColabs/blob/main/PixArt/PixArt_Sigma.ipynb
A diffusers script looks like this:
```py
import random
import sys
import torch
from diffusers import Transformer2DModel
from scripts.diffusers_patches import pixart_sigma_init_patched_inputs, PixArtSigmaPipeline
assert getattr(Transformer2DModel, '_init_patched_inputs', False), "Need to Upgrade diffusers: pip install git+https://github.com/huggingface/diffusers"
setattr(Transformer2DModel, '_init_patched_inputs', pixart_sigma_init_patched_inputs)
device = 'mps'
weight_dtype = torch.bfloat16
pipe = PixArtSigmaPipeline.from_pretrained(
    "Vargol/PixArt-Sigma_16bit",
    torch_dtype=weight_dtype,
    variant="fp16",
    use_safetensors=True,
)
# Enable memory optimizations.
# pipe.enable_model_cpu_offload()
pipe.to(device)
prompt = "Cinematic science fiction film still.A cybernetic demon awaits her friend in a bar selling flaming oil drinks. The barman is a huge tree being, towering over the demon"
for i in range(4):
    seed = random.randint(0, sys.maxsize)
    generator = torch.Generator("mps").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=40).images[0]
    image.save(f"pas_{seed}.png")
```
|
IAFrance/ECE-TW3-JRGL-VHF3 | IAFrance | 2024-04-15T19:58:41Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T19:39:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- MTSAIR/MultiVerse_70B
- davidkim205/Rhea-72b-v0.5
---
# ECE-TW3-JRGL-VHF3
ECE-TW3-JRGL-VHF3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
## 🧩 Configuration |
kunkun666/kunkun_dat_llama-13b | kunkun666 | 2024-04-19T02:43:25Z | 396 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T08:32:55Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PawanKrd/Meta-Llama-3-70B-Instruct-GGUF | PawanKrd | 2024-04-18T21:37:11Z | 396 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T20:14:45Z | ---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
model_name: Llama 3 70B Instruct
base_model: meta-llama/Meta-Llama-3-70B-Instruct
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: PawanKrd
---
# Llama 3 70B Instruct - GGUF
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta's Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: PawanKrd/Llama-3-70B-Instruct-GGUF and below it, a specific filename to download, such as: llama-3-70b-instruct.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PawanKrd/Llama-3-70B-Instruct-GGUF llama-3-70b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download PawanKrd/Llama-3-70B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PawanKrd/Llama-3-70B-Instruct-GGUF llama-3-70b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-3-70b-instruct.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("PawanKrd/Llama-3-70B-Instruct-GGUF", model_file="llama-3-70b-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
[Pawan.Krd's Discord server](https://discord.gg/pawan)
## Credits
This README file was initially created by [TheBloke](https://huggingface.co/TheBloke) and has been modified for this repository. |
stablediffusionapi/pure-evolution-v5-inpaint | stablediffusionapi | 2024-04-28T12:20:44Z | 396 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-04-28T12:18:32Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# pure Evolution V5-inpainting API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "pure-evolution-v5-inpaint"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/pure-evolution-v5-inpaint)
Model link: [View model](https://modelslab.com/models/pure-evolution-v5-inpaint)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "pure-evolution-v5-inpaint",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
Rebecca19990101/Llama3-Petro-Instruct-v1 | Rebecca19990101 | 2024-04-30T01:52:43Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"en",
"dataset:Rebecca19990101/petro-dataset-v2",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-04-29T09:02:16Z | ---
library_name: transformers
tags:
- unsloth
license: apache-2.0
datasets:
- Rebecca19990101/petro-dataset-v2
language:
- en
metrics:
- code_eval
- accuracy
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abhishek/autotrain-llama3-oh-sft-v0-3 | abhishek | 2024-04-29T14:12:30Z | 396 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-29T14:12:29Z | ---
license: other
library_name: transformers
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
KnutJaegersberg/Deita-Mixtral-8x7b | KnutJaegersberg | 2024-05-01T09:30:39Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-01T09:30:39Z | ---
license: apache-2.0
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
``` |
netcat420/MFANNv0.7 | netcat420 | 2024-05-02T18:21:08Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-05-02T16:09:29Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
MFANN 8b version 0.7

MFANN is a chain-of-thought trained model fine-tuned on the MFANN dataset as it stood on 5/2/2024; the dataset is ever growing and expanding!
|
DUAL-GPO/phi-2-gpo-final-i0 | DUAL-GPO | 2024-05-07T14:21:09Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-06T16:00:25Z | ---
license: mit
--- |
Knobi3/Evomerge2 | Knobi3 | 2024-05-19T20:15:31Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-13T07:04:31Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Evolutionary model merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method with NeuralBeagle14-7B as the base; the evolutionary search evaluated over 100 candidate merges.
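Conceptually, task arithmetic adds weighted parameter deltas (task vectors) of each fine-tuned model to the base model. A toy sketch over fake state dicts, purely for illustration and not the actual mergekit implementation:
```python
# Toy illustration of task arithmetic: merged = base + sum_i w_i * (model_i - base), per tensor.
import torch

def task_arithmetic_merge(base_sd, model_sds, weights):
    merged = {}
    for name, base_param in base_sd.items():
        delta = sum(w * (sd[name] - base_param) for sd, w in zip(model_sds, weights))
        merged[name] = base_param + delta
    return merged

# Tiny fake "state dicts" standing in for real checkpoints.
base = {"w": torch.zeros(2)}
finetunes = [{"w": torch.tensor([1.0, 0.0])}, {"w": torch.tensor([0.0, 2.0])}]
print(task_arithmetic_merge(base, finetunes, weights=[0.6, -0.25]))
# -> {'w': tensor([ 0.6000, -0.5000])}
```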
### Models Merged
The following models were included in the merge:
* Starling-LM-7B-beta_581094980
* Mistral-7B-v0.1-flashback-v2-instruct_3664132380
* NeuralBeagle14-7B_2368216670
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: /content/evol_merge_storage/input_models/NeuralBeagle14-7B_2368216670
dtype: bfloat16
merge_method: task_arithmetic
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/NeuralBeagle14-7B_2368216670
parameters:
weight: 0.6116678110210994
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/Starling-LM-7B-beta_581094980
parameters:
weight: -0.24959657782037278
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1-flashback-v2-instruct_3664132380
parameters:
weight: 0.540324494683666
- sources:
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/NeuralBeagle14-7B_2368216670
parameters:
weight: 0.3293682339424332
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/Starling-LM-7B-beta_581094980
parameters:
weight: -0.023694567670847724
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1-flashback-v2-instruct_3664132380
parameters:
weight: -0.1930115458123503
- sources:
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/NeuralBeagle14-7B_2368216670
parameters:
weight: 0.27340593188424295
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/Starling-LM-7B-beta_581094980
parameters:
weight: 0.08277665681111157
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1-flashback-v2-instruct_3664132380
parameters:
weight: -0.04650853736971121
- sources:
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/NeuralBeagle14-7B_2368216670
parameters:
weight: 0.22175238436196998
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/Starling-LM-7B-beta_581094980
parameters:
weight: 0.3692597806977656
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1-flashback-v2-instruct_3664132380
parameters:
weight: 0.5617035813353589
``` |
RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf | RichardErkhov | 2024-05-30T09:11:02Z | 396 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T06:16:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TekniumAiroboros-Nebula-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/TekniumAiroboros-Nebula-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TekniumAiroboros-Nebula-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [TekniumAiroboros-Nebula-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [TekniumAiroboros-Nebula-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [TekniumAiroboros-Nebula-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [TekniumAiroboros-Nebula-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [TekniumAiroboros-Nebula-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [TekniumAiroboros-Nebula-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [TekniumAiroboros-Nebula-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [TekniumAiroboros-Nebula-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [TekniumAiroboros-Nebula-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [TekniumAiroboros-Nebula-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [TekniumAiroboros-Nebula-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [TekniumAiroboros-Nebula-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [TekniumAiroboros-Nebula-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [TekniumAiroboros-Nebula-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [TekniumAiroboros-Nebula-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [TekniumAiroboros-Nebula-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [TekniumAiroboros-Nebula-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [TekniumAiroboros-Nebula-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [TekniumAiroboros-Nebula-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [TekniumAiroboros-Nebula-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [TekniumAiroboros-Nebula-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_TekniumAiroboros-Nebula-7B-gguf/blob/main/TekniumAiroboros-Nebula-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.82 |
| ARC (25-shot) | 57.17 |
| HellaSwag (10-shot) | 81.72 |
| MMLU (5-shot) | 55.25 |
| TruthfulQA (0-shot) | 51.64 |
| Winogrande (5-shot) | 73.24 |
| GSM8K (5-shot) | 9.4 |
| DROP (3-shot) | 41.33 |
|
mradermacher/Llama-Salad-4x8B-V2-GGUF | mradermacher | 2024-06-01T07:30:35Z | 396 | 0 | transformers | [
"transformers",
"gguf",
"nsfw",
"not-for-all-audiences",
"llama-3",
"text-generation-inference",
"moe",
"mergekit",
"merge",
"en",
"base_model:HiroseKoichi/Llama-Salad-4x8B-V2",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-31T18:07:15Z | ---
base_model: HiroseKoichi/Llama-Salad-4x8B-V2
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- nsfw
- not-for-all-audiences
- llama-3
- text-generation-inference
- moe
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/HiroseKoichi/Llama-Salad-4x8B-V2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
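As a minimal sketch of what "concatenating multi-part files" usually means (the filenames below are hypothetical, and whether this applies depends on how a given quant was split): plain byte-split parts can simply be `cat`-ed back together, while splits produced by llama.cpp's gguf-split tool should be merged with that tool instead.
```bash
# Hypothetical filenames - adjust to the parts you actually downloaded.
# Plain byte-split parts: simple concatenation restores the original file.
cat Llama-Salad-4x8B-V2.Q8_0.gguf.part1of2 Llama-Salad-4x8B-V2.Q8_0.gguf.part2of2 > Llama-Salad-4x8B-V2.Q8_0.gguf
# Parts written by gguf-split (named like *-00001-of-00002.gguf): use its merge mode, not cat.
# The binary may be called gguf-split instead of llama-gguf-split in older builds.
./llama-gguf-split --merge Llama-Salad-4x8B-V2.Q8_0-00001-of-00002.gguf Llama-Salad-4x8B-V2.Q8_0.gguf
```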
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.IQ3_XS.gguf) | IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.IQ3_M.gguf) | IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.IQ4_XS.gguf) | IQ4_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Salad-4x8B-V2-GGUF/resolve/main/Llama-Salad-4x8B-V2.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Sreevadan/DPO-Llama3-8b | Sreevadan | 2024-06-03T03:00:50Z | 396 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:Intel/orca_dpo_pairs",
"dataset:Open-Orca/OpenOrca",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-06-02T15:41:43Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
- Open-Orca/OpenOrca
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
v8karlo/UNCENSORED-Phi-3-mini-4k-geminified-Q4_K_M-GGUF | v8karlo | 2024-06-06T01:57:06Z | 396 | 3 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:failspy/Phi-3-mini-4k-geminified",
"license:mit",
"region:us"
]
| text-generation | 2024-06-06T01:53:19Z | ---
language:
- multilingual
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
base_model: failspy/Phi-3-mini-4k-geminified
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: What's the difference between a banana and a strawberry?
---
# v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF
This model was converted to GGUF format from [`failspy/Phi-3-mini-4k-geminified`](https://huggingface.co/failspy/Phi-3-mini-4k-geminified) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/failspy/Phi-3-mini-4k-geminified) for more details on the model.
Convert Safetensors to GGUF:
https://huggingface.co/spaces/ggml-org/gguf-my-repo

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -c 2048
```
|
NikolayKozloff/RoLlama3-8b-Instruct-Q4_0-GGUF | NikolayKozloff | 2024-06-30T20:26:06Z | 396 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"ro",
"base_model:OpenLLM-Ro/RoLlama3-8b-Instruct",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-06-30T12:08:15Z | ---
base_model: OpenLLM-Ro/RoLlama3-8b-Instruct
language:
- ro
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/RoLlama3-8b-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`OpenLLM-Ro/RoLlama3-8b-Instruct`](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenLLM-Ro/RoLlama3-8b-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q4_0-GGUF --hf-file rollama3-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q4_0-GGUF --hf-file rollama3-8b-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q4_0-GGUF --hf-file rollama3-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/RoLlama3-8b-Instruct-Q4_0-GGUF --hf-file rollama3-8b-instruct-q4_0.gguf -c 2048
``` |
IlyaGusev/xlm_roberta_large_headline_cause_full | IlyaGusev | 2022-07-13T15:35:52Z | 395 | 3 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
language:
- ru
- en
tags:
- xlm-roberta-large
datasets:
- IlyaGusev/headline_cause
license: apache-2.0
widget:
- text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку"
---
# XLM-RoBERTa HeadlineCause Full
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
    start_index = 0
    while start_index < len(data):
        end_index = start_index + batch_size
        batch = data[start_index:end_index]
        yield batch
        start_index = end_index
def pipe_predict(data, pipe, batch_size=64):
    raw_preds = []
    for batch in tqdm(get_batch(data, batch_size)):
        raw_preds += pipe(batch)
    return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_full"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governor’s restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"Песков опроверг свой перевод на удаленку",
"Дмитрий Песков перешел на удаленку"
)
]
pipe_predict(texts, pipe)
```
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wukevin/foldingdiff_cath | wukevin | 2022-10-05T19:01:50Z | 395 | 4 | transformers | [
"transformers",
"bert",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-05T17:59:57Z | ---
license: mit
---
|
lorahub/flan_t5_large-qasc_is_correct_1 | lorahub | 2023-07-24T09:38:21Z | 395 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T09:38:10Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-adversarial_qa_droberta_tell_what_it_is | lorahub | 2023-07-24T09:57:33Z | 395 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T09:57:23Z | ---
library_name: peft
---
|
TheBloke/CAMEL-13B-Combined-Data-GGUF | TheBloke | 2023-09-27T12:53:10Z | 395 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2303.17760",
"base_model:camel-ai/CAMEL-13B-Combined-Data",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T01:30:49Z | ---
license: other
model_name: CAMEL 13B Combined Data
base_model: camel-ai/CAMEL-13B-Combined-Data
inference: false
model_creator: CAMEL
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CAMEL 13B Combined Data - GGUF
- Model creator: [CAMEL](https://huggingface.co/camel-ai)
- Original model: [CAMEL 13B Combined Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Camel AI's CAMEL 13B Combined Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
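If you want to inspect what a GGUF file actually contains (quantisation type, context length, tokenizer metadata and so on), the `gguf` Python package published alongside llama.cpp can read the header; here is a minimal sketch, with the filename assumed from the provided-files table further down.
```python
from gguf import GGUFReader  # pip install gguf

# Filename assumed from the provided-files table below.
reader = GGUFReader("camel-13b-combined.Q4_K_M.gguf")
for name in reader.fields:           # metadata keys, e.g. llama.context_length
    print(name)
print(f"{len(reader.tensors)} tensors stored in the file")
```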
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF)
* [CAMEL's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
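As a rough cross-check of those bpw figures, here is a small illustrative calculation for GGML_TYPE_Q4_K. It is a sketch of the layout described above plus an fp16 super-block scale/min pair (that pair is an assumption about the storage layout, not stated in the list), not llama.cpp's exact source.
```python
# Q4_K super-block: 8 blocks x 32 weights, 4-bit codes, 6-bit scale and 6-bit min
# per block, plus one fp16 scale and one fp16 min for the whole super-block (assumed).
weights = 8 * 32                     # 256 weights per super-block
quant_bits = weights * 4             # 1024 bits of 4-bit codes
block_scale_bits = 8 * (6 + 6)       # 96 bits of per-block scales and mins
superblock_bits = 2 * 16             # 32 bits for the fp16 super-block scale/min
print((quant_bits + block_scale_bits + superblock_bits) / weights)  # 4.5 bpw, matching the text
```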
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [camel-13b-combined.Q2_K.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [camel-13b-combined.Q3_K_S.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [camel-13b-combined.Q3_K_M.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [camel-13b-combined.Q3_K_L.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [camel-13b-combined.Q4_0.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [camel-13b-combined.Q4_K_S.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [camel-13b-combined.Q4_K_M.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [camel-13b-combined.Q5_0.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [camel-13b-combined.Q5_K_S.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [camel-13b-combined.Q5_K_M.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [camel-13b-combined.Q6_K.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [camel-13b-combined.Q8_0.gguf](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGUF/blob/main/camel-13b-combined.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/CAMEL-13B-Combined-Data-GGUF and below it, a specific filename to download, such as: camel-13b-combined.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/CAMEL-13B-Combined-Data-GGUF camel-13b-combined.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/CAMEL-13B-Combined-Data-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CAMEL-13B-Combined-Data-GGUF camel-13b-combined.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m camel-13b-combined.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/CAMEL-13B-Combined-Data-GGUF", model_file="camel-13b-combined.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
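As a minimal sketch of the llama-cpp-python route with LangChain (package names as of recent LangChain releases; the local GGUF path and parameter values below are assumptions, not part of this repo):
```python
from langchain_community.llms import LlamaCpp  # pip install langchain-community llama-cpp-python

# Assumes the Q4_K_M file from this repo has been downloaded to the working directory.
llm = LlamaCpp(
    model_path="camel-13b-combined.Q4_K_M.gguf",
    n_ctx=2048,        # context window
    n_gpu_layers=32,   # set to 0 for CPU-only inference
    temperature=0.7,
)
prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three uses of a paperclip.\n\n### Response:"
)
print(llm.invoke(prompt))
```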
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Camel AI's CAMEL 13B Combined Data
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Camel AI's CAMEL 13B Combined Data fp16
These files are pytorch format fp16 model files for [Camel AI's CAMEL 13B Combined Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Camel AI's CAMEL 13B Combined Data
CAMEL-13B-Combined-Data is a chat large language model obtained by finetuning the LLaMA-13B model on a total of 229K conversations collected through our [CAMEL](https://arxiv.org/abs/2303.17760) framework, 100K English public conversations from ShareGPT that can be found [here](https://github.com/lm-sys/FastChat/issues/90#issuecomment-1493250773), and 52K instructions from the Alpaca dataset that can be found [here](https://github.com/tatsu-lab/stanford_alpaca/blob/761dc5bfbdeeffa89b8bff5d038781a4055f796a/alpaca_data.json). We evaluate our model offline using EleutherAI's language model evaluation harness used by Huggingface's Open LLM Benchmark. CAMEL<sup>*</sup>-13B scores an average of **58.1**, outperforming LLaMA-30B (58.3), and on par with LLaMA-65B (58.1)!
| Model | size | ARC-C (25 shots, acc_norm) | HellaSwag (10 shots, acc_norm) | MMLU (5 shots, acc_norm) | TruthfulQA (0 shot, mc2) | Average | Delta |
|-------------|:----:|:---------------------------:|:-------------------------------:|:-------------------------:|:-------------------------:|:-------:|-------|
| LLaMA | 13B | 50.8 | 78.9 | 37.7 | 39.9 | 51.8 | - |
| Vicuna | 13B | 47.4 | 75.2 | 39.6 | 49.8 | 53.7 | 1.9 |
| CAMEL<sup>*</sup> | 13B | 55.5 | 79.3 | 50.3 | 47.3 | 58.1 | 6.3 |
| LLaMA | 65B | 57.8 | 84.2 | 48.8 | 42.3 | **58.3** | 6.5 |
<!-- original-model-card end -->
|
TheBloke/Inkbot-13B-8k-0.2-GGUF | TheBloke | 2023-10-02T15:01:50Z | 395 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Tostino/Inkbot-13B-8k-0.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-02T14:37:18Z | ---
base_model: Tostino/Inkbot-13B-8k-0.2
inference: false
license: llama2
model_creator: Adam Brusselback
model_name: Inkbot 13B 8K 0.2
model_type: llama
prompt_template: '<#meta#>
- Date: [DATE]
- Task: [TASK TYPE]
<#system#>
{system_message}
<#chat#>
<#user#>
{prompt}
<#bot#>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Inkbot 13B 8K 0.2 - GGUF
- Model creator: [Adam Brusselback](https://huggingface.co/Tostino)
- Original model: [Inkbot 13B 8K 0.2](https://huggingface.co/Tostino/Inkbot-13B-8k-0.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Adam Brusselback's Inkbot 13B 8K 0.2](https://huggingface.co/Tostino/Inkbot-13B-8k-0.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF)
* [Adam Brusselback's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Tostino/Inkbot-13B-8k-0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Inkbot
```
<#meta#>
- Date: [DATE]
- Task: [TASK TYPE]
<#system#>
{system_message}
<#chat#>
<#user#>
{prompt}
<#bot#>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [inkbot-13b-8k-0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [inkbot-13b-8k-0.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [inkbot-13b-8k-0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [inkbot-13b-8k-0.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [inkbot-13b-8k-0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [inkbot-13b-8k-0.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [inkbot-13b-8k-0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [inkbot-13b-8k-0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [inkbot-13b-8k-0.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [inkbot-13b-8k-0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [inkbot-13b-8k-0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [inkbot-13b-8k-0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Inkbot-13B-8k-0.2-GGUF/blob/main/inkbot-13b-8k-0.2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Inkbot-13B-8k-0.2-GGUF and below it, a specific filename to download, such as: inkbot-13b-8k-0.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Inkbot-13B-8k-0.2-GGUF inkbot-13b-8k-0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Inkbot-13B-8k-0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Inkbot-13B-8k-0.2-GGUF inkbot-13b-8k-0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m inkbot-13b-8k-0.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<#meta#>\n- Date: [DATE]\n- Task: [TASK TYPE]\n<#system#>\n{system_message}\n<#chat#>\n<#user#>\n{prompt}\n<#bot#>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Inkbot-13B-8k-0.2-GGUF", model_file="inkbot-13b-8k-0.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Adam Brusselback's Inkbot 13B 8K 0.2
# Model Card for Inkbot
## Model Details
Inkbot is a conversational AI model designed to interpret and respond to structured prompts with or without contextual information. Its unique feature lies in the structured prompt system, allowing users to engage in dynamic dialogues that can evolve based on context, metadata, and user input.
### Performance
- The model excels in RAG type queries, answering from context, and overriding memory when necessary.
- It can handle very large contexts, but may sometimes enter a repeating text loop, especially during complex tasks.
- The model is intended to be more functional and less chatty, avoiding the waste of tokens on superfluous language.
## How to Use
Inkbot uses a structured prompt template system.
### Prompt Template Structure
#### With Context:
```
<#meta#>
- Date: {current_date}
- Task: {task_name}
<#system#>
{system_prompt}
<#chat#>
<#user#>
{user}
<#user_context#>
{user_context}
<#bot#>
{bot}
```
#### Without Context:
```
<#meta#>
- Date: {current_date}
- Task: {task_name}
<#system#>
{system_prompt}
<#chat#>
<#user#>
{user}
<#bot#>
{bot}
```
### Examples
#### With Context:
```
<#meta#>
- Date: 2023-07-20
- Task: general
<#system#>
A dialogue focused on determining dates based on context clues.
<#chat#>
<#user#>
What is today's date?
<#bot#>
The date is: 2023-07-20
<#user#>
When did the event in the provided context take place?
<#user_context#>
It's the day after new years eve, and I'm starting 2018 with a headache.
<#bot#>
The event in the context took place on: 2018-01-01
```
#### Without Context:
```
<#meta#>
- Date: 2023-07-20
- Task: general
<#system#>
A simple dialogue about daily queries.
<#chat#>
<#user#>
Will it rain today?
<#bot#>
I am not sure, without knowing your location I am unable to determine the weather.
<#user#>
Can you compose a haiku for me?
<#bot#>
Whispers of the breeze,
Nature's melodies unfold,
Dawn dispels the night.
```
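For programmatic use, a small helper along these lines (hypothetical, not part of the original release) can assemble the structured prompt before you hand it to your inference backend:

```python
# Hypothetical helper that assembles an Inkbot prompt from the template above.
from datetime import date

def build_inkbot_prompt(task, system_prompt, user, user_context=None):
    parts = [
        "<#meta#>",
        f"- Date: {date.today().isoformat()}",
        f"- Task: {task}",
        "<#system#>",
        system_prompt,
        "<#chat#>",
        "<#user#>",
        user,
    ]
    if user_context:  # optional supplementary context, placed after the user turn
        parts += ["<#user_context#>", user_context]
    parts.append("<#bot#>")
    return "\n".join(parts) + "\n"

print(build_inkbot_prompt("general", "A simple dialogue about daily queries.", "Will it rain today?"))
```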
## Task Options
The model has been trained on a variety of tasks. Below is a breakdown of these tasks, along with example prompts to help guide your usage.
### 2. Content Generation
Tasks in this category involve creating or expanding content.
#### kg_writer (800 examples)
*Example Prompts:*
- "Using the provided knowledge graph, write an article about the topics and entities in the graph, incorporating the linked ideas. Use idea tags while writing to help focus."
- "Construct a story based on the information in the knowledge graph."
#### summary (1,600 examples)
*Example Prompts:*
- "Generate an extensive summary of the given document."
- "Please read the provided document to understand the context and content. Use this understanding to generate a summary. Separate the article into chunks, and sequentially create a summary for each chunk. Give me a final summary in the end."
#### paraphrase (1,100 examples)
*Example Prompts:*
- "Rephrase the following sentence while retaining its original meaning."
- "Can you provide an alternative wording for the paragraph below?"
---
### 3. Content Analysis
Tasks in this category evaluate, grade, or filter content.
#### grading (400 examples)
*Example Prompts:*
- "Based on the provided document, please rate the usefulness as training data on a scale from 0-5."
#### sponsorblock (5,200 examples)
*Example Prompts:*
- "Read the document and extract any sentences or phrases that contain explicit mentions of sponsorship, promotional partnerships, or any form of paid content."
---
### 4. Information Structuring
Tasks in this category involve the structured representation or extraction of information.
#### kg (3,600 examples)
*Example Prompts:*
- "Create a Knowledge Graph from the document provided."
- "Extract key concepts and relationships from the conversation to form a knowledge graph."
---
### 5. General Interaction
Tasks in this category are designed for general questions and interactions.
#### general (1,600 examples)
*Example Prompts:*
- "What is the capital of France?"
- "Explain particle physics to a 5 years old."
## Limitations
- Adhere to the prompt structure for best results.
- When providing contextual details, clarity is essential for Inkbot to derive accurate and meaningful responses.
- Memory overriding via the user_context property generally only works for the next prompt or two, after which the model reverts to its original behavior.
- On complex tasks, like creating a coherent story based on a set of facts from context, there's a potential for a repeating text loop as context fills.
- Sometimes the model doesn't know when to end a knowledge graph, which can result in adding nodes and edges until it runs out of context.
## Additional Notes
- Use rope-freq-scale=0.5 or compress_pos_emb=2 for 8k ctx
- The 'date', 'task', and 'system' are crucial metadata components that need to be provided outside the core dialogue.
- Use the 'user_context' when you want to offer supplementary context that guides Inkbot's response. You can interleave it in the chat log as necessary. It comes after the user's instruction.
- The specific tag format, such as `<#word#>`, is used because many APIs filter out `<|word|>`-style tags, and this format makes interactions easier.
---
license: llama2
---
<!-- original-model-card end -->
|
maddes8cht/h2oai-h2ogpt-gm-oasst1-multilang-2048-falcon-7b-gguf | maddes8cht | 2023-11-19T21:34:37Z | 395 | 1 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-22T17:02:59Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# h2ogpt-gm-oasst1-multilang-2048-falcon-7b - GGUF
- Model creator: [h2oai](https://huggingface.co/h2oai)
- Original model: [h2ogpt-gm-oasst1-multilang-2048-falcon-7b](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b)
# K-Quants in Falcon 7b models
New releases of llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (Falcon 40B is, and always has been, fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
This solution therefore ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0, and Q5_1 quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports this format and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
# Quantization variants
There is a range of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With Q6_K, you'll likely find it challenging to discern a quality difference from the original model: ask your model the same question twice and you may encounter bigger quality differences than the quantization introduces.
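Whichever variant you pick, loading it works the same way; here is a minimal sketch with `llama-cpp-python` (the file name is an assumed example, substitute the quant you actually downloaded):

```python
# Hedged sketch: load a GGUF quant of this model with llama-cpp-python.
# The model_path below is an assumed example file name.
from llama_cpp import Llama

llm = Llama(
    model_path="./h2ogpt-gm-oasst1-multilang-2048-falcon-7b-Q4_K_M.gguf",
    n_ctx=2048,        # the model's context window
    n_gpu_layers=0,    # raise this if you have GPU acceleration
)
prompt = "<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>"
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```

The prompt string follows the `<|prompt|>...<|endoftext|><|answer|>` format documented in the original model card below.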
---
# Original Model Card:
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b",
torch_dtype=torch.bfloat16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
osunlp/TableLlama | osunlp | 2023-12-07T23:51:11Z | 395 | 23 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:osunlp/TableInstruct",
"arxiv:2311.09206",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-20T01:50:22Z | ---
license: cc-by-4.0
language:
- en
datasets:
- osunlp/TableInstruct
---
---
# TableLlama: Towards Open Large Generalist Models for Tables
Project Page: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
Paper: [https://arxiv.org/abs/2311.09206](https://arxiv.org/abs/2311.09206)
Dataset: [https://huggingface.co/datasets/osunlp/TableInstruct/](https://huggingface.co/datasets/osunlp/TableInstruct/)
Code: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
## Introduction
We introduce TableLlama, an open-source large generalist model specifically tailored for various table-based tasks. The TableLlama model is trained on 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), a meticulously curated instruction tuning dataset for tables. TableLlama is tuned on 2.6 million table-based task examples and can handle up to 8K context!
## Model
[TableLlama-7B](https://huggingface.co/osunlp/TableLlama/)
## Data
The models are trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), which includes a comprehensive table-based instruction tuning dataset that covers a variety of real-world tables and realistic tasks. We include 14 datasets of 11 tasks in total. Check out the dataset card for more details.
## Training Procedure
The models are fine-tuned on the TableInstruct dataset using the fully fine-tuning version of LongLoRA (7B) as the base model, which replaces the vanilla attention mechanism of the original Llama-2 (7B) with shift short attention. Training takes 9 days on a cluster of 48 A100 (80GB) GPUs. Check out our paper for more details.
## Evaluation
The models are evaluated on 8 in-domain datasets of 8 tasks and 6 out-of-domain datasets of 4 tasks.
## Usage
You can use the models through Huggingface's Transformers library.
Check our Github repo for more advanced use: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
## Prompt Format
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that
appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Question:
{question}
### Response:
```
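As a rough sketch of putting the model and this prompt format together with `transformers` (the generation settings and example table here are illustrative assumptions, not the paper's exact configuration):

```python
# Sketch: query TableLlama with the prompt format above (settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osunlp/TableLlama")
model = AutoModelForCausalLM.from_pretrained(
    "osunlp/TableLlama", torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that\nappropriately completes the request.\n\n"
    "### Instruction:\nAnswer the question based on the table.\n\n"
    "### Input:\n| country | capital |\n| France | Paris |\n\n"
    "### Question:\nWhat is the capital of France?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```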
## Limitations
We've tried our best to build table generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the table tasks and datasets. Still, not all table-based tasks can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@misc{zhang2023tablellama,
title={TableLlama: Towards Open Large Generalist Models for Tables},
author={Tianshu Zhang and Xiang Yue and Yifei Li and Huan Sun},
year={2023},
eprint={2311.09206},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lightblue/qarasu-14B-chat-plus-unleashed | lightblue | 2024-04-09T07:58:42Z | 395 | 19 | transformers | [
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"ja",
"en",
"dataset:OpenAssistant/oasst1",
"dataset:zetavg/ShareGPT-Processed",
"dataset:augmxnt/ultra-orca-boros-en-ja-v1",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-12-22T10:34:41Z | ---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
datasets:
- OpenAssistant/oasst1
- zetavg/ShareGPT-Processed
- augmxnt/ultra-orca-boros-en-ja-v1
language:
- ja
- en
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c8a2e01c25d2c581a381c1/9CbN4lDGU42c-7DmK_mGM.png" alt="drawing" width="600"/>
</p>
Qwen/Qwen-14B-Chat + Karasu's finetuning datasets
# Demo ・ モデルのデモ
[Model demo ・ モデルのデモ](https://lightblue-qarasu.serveo.net/)
# Blog post・説明の記事
[Blog post・説明の記事](https://note.com/peter_lightblue/n/ne08a7c8cc47a)
# Evaluation

In our internal evaluations, we find the Qarasu model to have particularly high performance on the MT-Bench benchmark. We are currently awaiting external evaluations.
# How to use
### Hugging Face
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
tokenizer = AutoTokenizer.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lightblue/qarasu-14B-chat-plus-unleashed", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
pipe(prompt, max_new_tokens=100, do_sample=False, temperature=0.0, return_full_text=False)
```
### VLLM
```python
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/qarasu-14B-chat-plus-unleashed", trust_remote_code=True)
messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})
prompt = llm.llm_engine.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
# Base checkpoint
[Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat)
# Training datasets (total ~7B)
The same as the 'plus' checkpoint, but with about 6K refusals ("申し訳ありませんが、。。。") filtered out from the category dataset
* Lightblue's suite of Kujira datasets (unreleased)
* Lightblue's own question-based datasets (unreleased)
* Lightblue's own category-based datasets (unreleased)
* [OASST](https://huggingface.co/datasets/OpenAssistant/oasst1) (Japanese chats only)
* [ShareGPT](https://huggingface.co/datasets/zetavg/ShareGPT-Processed) (Japanese chats only)
* [augmxnt/ultra-orca-boros-en-ja-v1](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) (['airoboros', 'slimorca', 'ultrafeedback', 'airoboros_ja_new'] only)
# Developed by
<a href="https://www.lightblue-tech.com">
<img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/>
</a>
### Engineers
Peter Devine
Sho Higuchi
### Advisors
Yuuki Yamanaka
Atom Sonoda
### Project manager
Shunichi Taniguchi
### Dataset evaluator
Renju Aoki |
Artefact2/Cat-8x7B-GGUF | Artefact2 | 2024-03-11T11:47:35Z | 395 | 1 | null | [
"gguf",
"not-for-all-audiences",
"en",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-02-13T23:40:37Z | ---
language:
- en
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
These are GGUF quantized versions of [Envoid/Cat-8x7B](https://huggingface.co/Envoid/Cat-8x7B).
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf` |
levimorin/5DLJiMEmqqsE1XPz9KvUvWaiBosW9EGHp8KRk48uoyhogvts_vgg | levimorin | 2024-03-08T19:10:13Z | 395 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-03T04:59:39Z | Entry not found |
fatgong/5CXtYNwRRCQEdY9kgLdhZmNydHUEXDQ5mxHYcakoeWBF1d6m_vgg | fatgong | 2024-03-16T06:02:10Z | 395 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-09T14:10:03Z | Entry not found |
gxkok/starlight-xl-animated-v3 | gxkok | 2024-03-25T02:08:06Z | 395 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-03-25T01:47:51Z | ---
license: openrail++
library_name: diffusers
---
|
Joseph717171/BigYi-15.75B-200k | Joseph717171 | 2024-04-10T05:44:14Z | 395 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.04652",
"arxiv:2311.16502",
"arxiv:2401.11944",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-10T04:51:46Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
license_name: yi-license
license_link: LICENSE
---

# Credit for the model card's description goes to ddh0, mergekit, and [01-ai](https://huggingface.co/01-ai)
# Inspired by [abacusai's bigyi-15b](https://huggingface.co/abacusai/bigyi-15b)
# BigYi-15.75B-200K
This is BigYi-15.75B-200K, a layer-interleaved version of [01-ai/Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K).
This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 9 billion parameter model.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
BigYi-15.75B-200K is a base/completion model. So, there is no chat template.
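Since it is a completion model, you feed it raw text directly; a minimal sketch with `transformers` (repo id taken from this card, generation settings illustrative):

```python
# Sketch: plain text completion with BigYi-15.75B-200K (no chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Joseph717171/BigYi-15.75B-200k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("There once was a ship that put to sea", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```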
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
# Layer-Interleaving of Yi-9B-200k
# where n = 48 (The number of layers the model originally had)
# m = 12 (The number of layers to interleave per row)
# M = 36 = 3m (The number of layers interleaved into the model)
# s = 48 + 36 = n + M (The number of layers the model will have after the layer interleaving)
# P = 15.75B parameters = (s/n)*9 (The number of parameters the model will have after the layer interleaving)
slices:
- sources:
- layer_range: [0, 12]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
# Interleaved Layer
- sources:
- layer_range: [6, 18]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
- sources:
- layer_range: [12, 24]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
# Interleaved Layer
- sources:
- layer_range: [18, 30]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
- sources:
- layer_range: [24, 36]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
# Interleaved Layer
- sources:
- layer_range: [30, 42]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
- sources:
- layer_range: [36, 48]
model: /Users/jsarnecki/opt/workspace/01-ai/Yi-9B-200K
```
<br>
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
<img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg">
</picture>
</br>
</br>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml">
<img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg">
</a>
</div>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue">
</a>
</div>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Yi_License-lightblue">
</a>
</div>
<div style="display: inline-block;">
<a href="mailto:[email protected]">
<img src="https://img.shields.io/badge/✉️[email protected]">
</a>
</div>
</div>
<div align="center">
<h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a>
</p>
<p align="center">
👩🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>
</p>
<p align="center">
👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="https://github.com/01-ai/Yi/issues/43" target="_blank"> 💬 WeChat </a>
</p>
<p align="center">
📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a>
</p>
<p align="center">
📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a>
</p>
<!-- DO NOT REMOVE ME -->
<hr>
<details open>
<summary><b>📕 Table of Contents</b></summary>
- [What is Yi?](#what-is-yi)
- [Introduction](#introduction)
- [Models](#models)
- [Chat models](#chat-models)
- [Base models](#base-models)
- [Model info](#model-info)
- [News](#news)
- [How to use Yi?](#how-to-use-yi)
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [llama.cpp](#quick-start---llamacpp)
- [conda-lock](#quick-start---conda-lock)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [Learning hub](#learning-hub)
- [Why Yi?](#why-yi)
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Base model performance](#base-model-performance)
- [Chat model performance](#chat-model-performance)
- [Tech report](#tech-report)
- [Citation](#citation)
- [Who can use Yi?](#who-can-use-yi)
- [Misc.](#misc)
- [Acknowledgements](#acknowledgments)
- [Disclaimer](#disclaimer)
- [License](#license)
</details>
<hr>
# What is Yi?
## Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).
- 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
- Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
- Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.
<details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br>
> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama.
- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## News
<details>
<summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary>
</details>
<details open>
<summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary>
</details>
<details open>
<summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>
<br>
In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance.
</details>
<details open>
<summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary>
<br>
<code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
</details>
<details open>
<summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary>
<br>
<code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li>
</details>
<details>
<summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary>
<br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`
You can try some of them interactively at:
- [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>
<details>
<summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary>
</details>
<details>
<summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>
<br>Application form:
- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>
<details>
<summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary>
<br>This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary>
<br>The first public release contains two bilingual (English/Chinese) base models
with the parameter sizes of 6B and 34B. Both of them are trained with 4K
sequence length and can be extended to 32K during inference time.
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Models
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment).
### Chat models
| Model | Download
|---|---
Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary)
Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary)
Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary)
Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary)
Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary)
Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary)
<sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). </sup></sub>
### Base models
| Model | Download |
|---|---|
Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary)
Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary)
Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B)
Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K)
Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary)
Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary)
<sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub>
### Model info
- For chat and base models
<table>
<thead>
<tr>
<th>Model</th>
<th>Intro</th>
<th>Default context window</th>
<th>Pretrained tokens</th>
<th>Training Data Date</th>
</tr>
</thead>
<tbody><tr>
<td>6B series models</td>
<td>They are suitable for personal and academic use.</td>
<td rowspan="3">4K</td>
<td>3T</td>
<td rowspan="3">Up to June 2023</td>
</tr>
<tr>
<td>9B series models</td>
<td>It is the best at coding and math in the Yi series models.</td>
<td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td>
</tr>
<tr>
<td>34B series models</td>
<td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability.</td>
<td>3T</td>
</tr>
</tbody></table>
- For chat models
<details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary>
<ul>
<br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.
<br>However, this higher diversity might amplify certain existing issues, including:
<li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li>
<li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.</li>
<li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li>
<li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help balance creativity and coherence in the model's outputs (a minimal sketch follows below).</li>
</ul>
</details>
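Building on the note above about adjusting temperature, top_p, and top_k, here is a minimal sketch with `transformers` (model choice and parameter values are illustrative assumptions):

```python
# Sketch: more conservative sampling for a Yi chat model; values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B-Chat", torch_dtype="auto", device_map="auto"
).eval()

messages = [{"role": "user", "content": "Summarize the benefits of drinking water."}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,        # lower temperature -> more focused outputs
    top_p=0.9,              # nucleus sampling cutoff
    top_k=40,               # restrict sampling to the 40 most likely tokens
    repetition_penalty=1.1,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```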
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# How to use Yi?
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- [llama.cpp](#quick-start---llamacpp)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [Learning hub](#learning-hub)
## Quick start
Getting up and running with Yi models is simple with multiple choices available.
### Choose your path
Select one of the following paths to begin your journey with Yi!

#### 🎯 Deploy Yi locally
If you prefer to deploy Yi models locally,
- 🙋♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:
- [pip](#quick-start---pip)
- [Docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- 🙋♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp).
#### 🎯 Not to deploy Yi locally
If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.
##### 🙋♀️ Run Yi with APIs
If you want to explore more features of Yi, you can adopt one of these methods:
- Yi APIs (Yi official)
- [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access!
- [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate)
##### 🙋♀️ Run Yi in playground
If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:
- [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official)
- Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
- [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate)
##### 🙋♀️ Chat with Yi
If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:
- [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face)
- No registration is required.
- [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta)
- Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - pip
This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference.
#### Step 0: Prerequisites
- Make sure Python 3.10 or a later version is installed.
- If you want to run other Yi models, see [software and hardware requirements](#deployment).
#### Step 1: Prepare your environment
To set up the environment and install the required packages, execute the following command.
```bash
git clone https://github.com/01-ai/Yi.git
cd yi
pip install -r requirements.txt
```
#### Step 2: Download the Yi model
You can download the weights and tokenizer of Yi models from the following sources:
- [Hugging Face](https://huggingface.co/01-ai)
- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/organization/01.AI)
#### Step 3: Perform inference
You can perform inference with Yi chat or base models as below.
##### Perform inference with Yi chat model
1. Create a file named `quick_start.py` and copy the following content to it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = '<your-model-path>'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
2. Run `quick_start.py`.
```bash
python quick_start.py
```
Then you can see an output similar to the one below. 🥳
```bash
Hello! How can I assist you today?
```
##### Perform inference with Yi base model
- Yi-34B
The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).
You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo).
```bash
python demo/text_generation.py --model <your-model-path>
```
Then you can see an output similar to the one below. 🥳
<details>
<summary>Output. ⬇️ </summary>
<br>
**Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry,
**Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
</details>
- Yi-9B
Input
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_DIR = "01-ai/Yi-9B"
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)
input_text = "# write the quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output
```bash
# write the quick sort algorithm
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
# test the quick sort algorithm
print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
```
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>
<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
    -v <your-model-path>:/models \
    ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>
<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>
<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '<your-model-mount-path>'</code> instead of <code>model_path = '<your-model-path>'</code>.</p>
<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model <your-model-mount-path>'</code> instead of <code>model <your-model-path></code>.</p>
</details>
### Quick start - conda-lock
<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:
1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.
2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
</details>
### Quick start - llama.cpp
<details>
<summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p>
- [Step 0: Prerequisites](#step-0-prerequisites)
- [Step 1: Download llama.cpp](#step-1-download-llamacpp)
- [Step 2: Download Yi model](#step-2-download-yi-model)
- [Step 3: Perform inference](#step-3-perform-inference)
#### Step 0: Prerequisites
- This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.
- Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine.
#### Step 1: Download `llama.cpp`
To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command.
```bash
git clone [email protected]:ggerganov/llama.cpp.git
```
#### Step 2: Download Yi model
2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF
```
2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command.
```bash
git-lfs pull --include yi-chat-6b.Q2_K.gguf
```
#### Step 3: Perform inference
To perform inference with the Yi model, you can use one of the following methods.
- [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal)
- [Method 2: Perform inference in web](#method-2-perform-inference-in-web)
##### Method 1: Perform inference in terminal
To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command.
> ##### Tips
>
> - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model.
>
> - By default, the model operates in completion mode.
>
> - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage.
```bash
make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e
...
How do you feed your pet fox? Please answer this question in 6 simple steps:
Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables.
Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day.
Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise.
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress.
Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections.
Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care.
...
```
Now you have successfully asked a question to the Yi model and got an answer! 🥳
##### Method 2: Perform inference in web
1. To initialize a lightweight and swift chatbot, run the following command.
```bash
cd llama.cpp
./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf
```
Then you can get an output like this:
```bash
...
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 5000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67)
llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67)
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67)
Available slots:
-> Slot 0 - max context: 2048
llama server listening at http://0.0.0.0:8080
```
2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.

3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.

</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Web demo
You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this scenario).
[Step 1: Prepare your environment](#step-1-prepare-your-environment).
[Step 2: Download the Yi model](#step-2-download-the-yi-model).
Step 3. To start a web service locally, run the following command.
```bash
python demo/web_demo.py -c <your-model-path>
```
You can access the web UI by entering the address provided in the console into your browser.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Fine-tuning
```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```
Once finished, you can compare the finetuned model and the base model with the following command:
```bash
bash finetune/scripts/run_eval.sh
```
<details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul>
### Finetune code for Yi 6B and 34B
#### Preparation
##### From Image
By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model.
You can also prepare your customized dataset in the following `jsonl` format:
```json
{ "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." }
```
And then mount them in the container to replace the default ones:
```bash
docker run -it \
-v /path/to/save/finetuned/model/:/finetuned-model \
-v /path/to/train.jsonl:/yi/finetune/data/train.json \
-v /path/to/eval.jsonl:/yi/finetune/data/eval.json \
ghcr.io/01-ai/yi:latest \
bash finetune/scripts/run_sft_Yi_6b.sh
```
##### From Local Server
Make sure you have conda installed. If not, run:
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
source ~/.bashrc
```
Then, create a conda env:
```bash
conda create -n dev_env python=3.10 -y
conda activate dev_env
pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7
```
#### Hardware Setup
For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.
For the Yi-34B model, the zero-offload technique consumes a lot of CPU memory, so be careful to limit the number of GPUs used for 34B finetune training. Use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh).
A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 at runtime by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.
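For example, launching the 34B finetune on the first four GPUs of an 8-GPU node might look like this (a sketch; the script path follows the repository layout used in this guide):
```bash
cd finetune/scripts
CUDA_VISIBLE_DEVICES=0,1,2,3 bash run_sft_Yi_34b.sh
```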
#### Quick Start
Download an LLM base model (6B or 34B) to MODEL_PATH (a download sketch follows the layout below). A typical model folder looks like this:
```bash
|-- $MODEL_PATH
| |-- config.json
| |-- pytorch_model-00001-of-00002.bin
| |-- pytorch_model-00002-of-00002.bin
| |-- pytorch_model.bin.index.json
| |-- tokenizer_config.json
| |-- tokenizer.model
| |-- ...
```
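One way to fetch the base model into MODEL_PATH is with `huggingface-cli` (a sketch; assumes `huggingface_hub` is installed and uses the Yi-6B repo as an example):
```bash
pip install -U huggingface-hub
huggingface-cli download 01-ai/Yi-6B --local-dir $MODEL_PATH
```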
Download a dataset from Hugging Face to local storage DATA_PATH, e.g. Dahoas/rm-static (a download sketch follows the layout below):
```bash
|-- $DATA_PATH
| |-- data
| | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet
| | |-- test-00000-of-00001-8c7c51afc6d45980.parquet
| |-- dataset_infos.json
| |-- README.md
```
`finetune/yi_example_dataset` contains example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG):
```bash
|-- $DATA_PATH
|--data
|-- train.jsonl
|-- eval.jsonl
```
`cd` into the scripts folder, copy and paste the script, and run. For example:
```bash
cd finetune/scripts
bash run_sft_Yi_6b.sh
```
For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 produces a chat model in about 20 minutes.
For the Yi-34B base model, initialization takes a relatively long time. Please be patient.
#### Evaluation
```bash
cd finetune/scripts
bash run_eval.sh
```
Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quantization
#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>
#### GPT-Q quantization
[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization)
method. It saves memory and provides potential speedups while retaining the accuracy
of the model.
Yi models can be GPT-Q quantized without much effort.
We provide a step-by-step tutorial below.
To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and
[exllama](https://github.com/turboderp/exllama).
Hugging Face Transformers has also integrated Optimum and AutoGPTQ, so GPTQ quantization can be performed directly through the Transformers API.
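As an illustration of that Transformers integration (separate from the repository script described below), a quantization call might look roughly like this. This is a sketch, assuming recent `transformers`, `optimum`, and `auto-gptq` installs; the model id, calibration dataset, and output path are placeholders:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "01-ai/Yi-6B"  # placeholder: any Yi base model
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

# 4-bit GPTQ with a group size of 128, calibrated on the built-in "c4" dataset
gptq_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)

model.save_pretrained("Yi-6B-GPTQ")  # placeholder output directory
tokenizer.save_pretrained("Yi-6B-GPTQ")
```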
##### Do Quantization
The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:
```bash
python quant_autogptq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/awq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>
#### AWQ quantization
[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization)
method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.
Yi models can be AWQ quantized without much effort.
We provide a step-by-step tutorial below.
To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).
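For reference, quantizing with the AutoAWQ library directly looks roughly like the sketch below (separate from the repository script described next; the model id, quantization settings, and output path are illustrative):
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "01-ai/Yi-6B"  # placeholder: any Yi base model
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Calibrate and quantize the weights to 4 bits, then save the result
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("Yi-6B-AWQ")  # placeholder output directory
tokenizer.save_pretrained("Yi-6B-AWQ")
```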
##### Do Quantization
The `quant_autoawq.py` script is provided for you to perform AWQ quantization:
```bash
python quant_autoawq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Deployment
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
#### Software requirements
Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software |
|---|---|
| Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi) |
| Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation) |
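If these packages are missing, they can typically be installed from PyPI (a sketch; the exact versions and CUDA wheels depend on your environment):
```bash
# AWQ runtime (for the 4-bit quantized models)
pip install autoawq

# GPTQ runtime (for the 8-bit quantized models)
pip install auto-gptq optimum
```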
#### Hardware requirements
Before deploying Yi in your environment, make sure your hardware meets the following requirements.
##### Chat models
| Model | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) |
Below are detailed minimum VRAM requirements under different batch use cases.
| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |
##### Base models
| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Learning hub
<details>
<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary>
<br>
Welcome to the Yi learning hub!
Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.
The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!
At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.
With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳
#### Tutorials
##### English tutorials
| Type | Deliverable | Date | Author |
|-------------|--------------------------------------------------------|----------------|----------------|
| Video | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) |
| Blog | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) |
| Video | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
| Video | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) |
##### Chinese tutorials
| Type | Deliverable | Date | Author |
|-------------|--------------------------------------------------------|----------------|----------------|
| Blog | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) |
| Blog | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) |
| Blog | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) |
| Blog | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) |
| Blog | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) |
| Blog | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) |
| Video | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
| Video | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2023-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
</details>
# Why Yi?
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
- [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k)
- [Yi-9B](#yi-9b)
## Ecosystem
Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
### Upstream
The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.
For example, the Yi series models are saved in the Llama model format, so you can load them directly with `LlamaForCausalLM` and `LlamaTokenizer` (or the `Auto` classes, as shown below). For more information, see [Use the chat model](#31-use-the-chat-model).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto")
```
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Downstream
> 💡 Tip
>
> - Feel free to create a PR and share the fantastic work you've built using the Yi series models.
>
> - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`.
#### Serving
If you want to get up and running with Yi in a few minutes, you can use the following services built upon Yi.
- Yi-34B-Chat: you can chat with Yi using one of the following platforms:
- [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand!
- [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs.
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization.
#### Quantization
If you have limited computational capabilities, you can use Yi's quantized models as follows.
These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
#### Fine-tuning
If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.
- [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi.
This is not an exhaustive list for Yi, but here are a few, sorted by downloads:
- [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
- [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ)
- [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ)
- [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
- [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm).
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset.
#### API
- [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box.
- [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.
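Once one of these OpenAI-compatible servers is running, any standard OpenAI client can talk to Yi. A minimal sketch (the base URL, API key, and model name are placeholders for whatever your server exposes):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="yi-34b-chat",  # placeholder: use the model name your server registers
    messages=[{"role": "user", "content": "Hi, what can you do?"}],
)
print(response.choices[0].message.content)
```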
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Tech report
For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652).
### Citation
```
@misc{ai2024yi,
title={Yi: Open Foundation Models by 01.AI},
author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai},
year={2024},
eprint={2403.04652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Benchmarks
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
### Chat model performance
The Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models on benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.

<details>
<summary> Evaluation methods and challenges. ⬇️ </summary>
- **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
- **Zero-shot vs. few-shot**: in chat models, the zero-shot approach is more commonly employed.
- **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.
- **Challenges faced**: some models are not well-suited to producing output in the specific format required by the instructions in a few datasets, which leads to suboptimal results.
<strong>*</strong>: C-Eval results are evaluated on the validation datasets
</details>
### Base model performance
#### Yi-34B and Yi-34B-200K
The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.

<details>
<summary> Evaluation methods. ⬇️</summary>
- **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.
- **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.
- **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.
- **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.
- **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.
- **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code".
- **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.
</details>
#### Yi-9B
Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.

- In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.

- In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Who can use Yi?
Everyone! 🙌 ✅
- The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt)
- For free commercial use, you only need to [complete this form](https://www.lingyiwanwu.com/yi-license) to get a Yi Model Commercial License.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Misc.
### Acknowledgments
A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.
[](https://github.com/01-ai/yi/graphs/contributors)
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Disclaimer
We use data compliance checking algorithms during the training process to
ensure the compliance of the trained model to the best of our ability. Due to
complex data and the diversity of language model usage scenarios, we cannot
guarantee that the model will generate correct and reasonable output in all
scenarios. Please be aware that there is still a risk of the model producing
problematic outputs. We will not be responsible for any risks and issues
resulting from misuse, misguidance, illegal usage, and related misinformation,
as well as any associated data security concerns.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### License
The source code in this repo is licensed under the [Apache 2.0
license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
For free commercial use, you only need to send an email to [get official commercial permission](https://www.lingyiwanwu.com/yi-license).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p> |
NotAiLOL/Boundary-mistral-4x7b-MoE | NotAiLOL | 2024-04-18T05:39:53Z | 395 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"HuggingFaceH4/zephyr-7b-beta",
"mistralai/Mistral-7B-Instruct-v0.2",
"teknium/OpenHermes-2.5-Mistral-7B",
"meta-math/MetaMath-Mistral-7B",
"Mistral",
"conversational",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:meta-math/MetaMath-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-17T11:36:00Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- HuggingFaceH4/zephyr-7b-beta
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
- Mistral
base_model:
- HuggingFaceH4/zephyr-7b-beta
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
---
# Boundary-mistral-4x7b-MoE
Boundary-mistral-4x7b-MoE is a Mixture of Experts (MoE) model made with the following models:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
## 🧩 Configuration
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: float16
gate_mode: cheap_embed
experts:
- source_model: HuggingFaceH4/zephyr-7b-beta
positive_prompts: ["You are an helpful general-pupose assistant."]
- source_model: mistralai/Mistral-7B-Instruct-v0.2
positive_prompts: ["You are helpful assistant."]
- source_model: teknium/OpenHermes-2.5-Mistral-7B
positive_prompts: ["You are helpful a coding assistant."]
- source_model: meta-math/MetaMath-Mistral-7B
positive_prompts: ["You are an assistant good at math."]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "NotAiLOL/Boundary-mistral-4x7b-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
neopolita/meta-llama-3-8b-instruct-gguf | neopolita | 2024-05-14T12:16:50Z | 395 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-04-18T20:53:35Z | ---
{}
---
# GGUF quants for [**meta-llama/Meta-Llama-3-8B-Instruct**](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
flammenai/flammen19X-mistral-7B | flammenai | 2024-04-19T03:10:32Z | 395 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"nsfw",
"not-for-all-audiences",
"dataset:ResplendentAI/NSFW_RP_Format_NoQuote",
"base_model:flammenai/flammen18X-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-19T02:16:16Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen18X-mistral-7B
datasets:
- ResplendentAI/NSFW_RP_Format_NoQuote
tags:
- nsfw
- not-for-all-audiences
---

# flammen19X-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [ResplendentAI/NSFW_RP_Format_NoQuote](https://huggingface.co/datasets/ResplendentAI/NSFW_RP_Format_NoQuote).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune Mistral-7b with SFT+TRL](https://colab.research.google.com/drive/1o_w0KastmEJNVwT5GoqMCciH-18ca5WS?usp=sharing) - [Maxime Labonne](https://huggingface.co/mlabonne)
|
AIGym/Llama-2-7b-chat-hf-structured-responses-SFT | AIGym | 2024-04-20T17:24:57Z | 395 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:AIGym/function_calling_v2",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-20T17:19:50Z | ---
library_name: transformers
license: apache-2.0
datasets:
- AIGym/function_calling_v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LiteLLMs/Llama-3-8B-16K-GGUF | LiteLLMs | 2024-04-23T16:33:00Z | 395 | 0 | null | [
"gguf",
"GGUF",
"dataset:Yukang/LongAlpaca-16k-length",
"region:us"
]
| null | 2024-04-23T15:50:17Z |
---
tags:
- GGUF
datasets:
- Yukang/LongAlpaca-16k-length
quantized_by: andrijdavid
---
# Llama-3-8B-16K-GGUF
- Original model: [Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-16K-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-8B-16K-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
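As a quick orientation, loading one of these GGUF files through LangChain's llama-cpp-python wrapper looks roughly like this (a sketch; assumes `langchain-community` and `llama-cpp-python` are installed, and that the model path points to a file downloaded as shown above):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # downloaded earlier
    n_ctx=8192,
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
)
print(llm.invoke("Name the planets in the solar system."))
```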
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama-3-8B-16K
This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the `Yukang/LongAlpaca-16k-length` dataset.
`rope_theta` was set to `1000000.0`. Trained with Axolotl.
<!-- original-model-card end -->
|
tlphams/Wizard-Zephyr-Orpo-8x22B | tlphams | 2024-05-06T15:51:30Z | 395 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:alpindale/WizardLM-2-8x22B",
"base_model:HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-24T07:24:44Z | ---
base_model:
- alpindale/WizardLM-2-8x22B
- HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-sa-4.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Models Merged
The following models were included in the merge:
* [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
* [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1)
## Benchmark results
### 1. MT-Bench from lmsys
We adapted the code from [FastChat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) to benchmark our model with GPT-4 as a judge. Here are the results:
```markdown
| | Model | Turn | Score |
|--------|-----------------------------------------|------|----------|
| First | tlphams/Wizard-Zephyr-Orpo-8x22B | 1 | 9.1625 |
| | mistralai/Mixtral-8x22B-Instruct-v0.1 | 1 | 9.1500 |
| Second | tlphams/Wizard-Zephyr-Orpo-8x22B | 2 | 8.873418 |
| | mistralai/Mixtral-8x22B-Instruct-v0.1 | 2 | 8.250000 |
| Average| tlphams/Wizard-Zephyr-Orpo-8x22B | | 9.018868 |
| | mistralai/Mixtral-8x22B-Instruct-v0.1 | | 8.700000 |
```
The score is slightly lower than [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), but still higher than GPT-4-0314, so the research and experimental work still needs to continue ^^ |