| modelId (string, lengths 5-122) | author (string, lengths 2-42) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, lengths 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, lengths 1-901k) |
|---|---|---|---|---|---|---|---|---|---|
infinitymatter/llama-3-8b-chat-doctor | infinitymatter | 2024-06-27T04:29:27Z | 425 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-26T07:31:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
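While the authors have not yet provided a quickstart, a minimal sketch using the 🤗 Transformers `pipeline` API is shown below. The prompt format, dtype, and generation settings are assumptions, since the card does not document them.
```python
# Hedged quickstart sketch (not provided by the model authors).
# The prompt format and generation settings below are assumptions.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="infinitymatter/llama-3-8b-chat-doctor",
    torch_dtype=torch.bfloat16,  # assumption: half precision to fit on a single GPU
    device_map="auto",
)

prompt = "Patient: I have had a mild headache for two days. What could be the cause?\nDoctor:"
outputs = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```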
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
michelecafagna26/gpt2-medium-finetuned-sst2-sentiment | michelecafagna26 | 2023-04-06T13:54:25Z | 424 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-classification",
"en",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2023-02-11T18:57:15Z | ---
license: apache-2.0
language: en
datasets:
- sst2
metrics:
- precision
- recall
- f1
tags:
- text-classification
---
# GPT-2-medium fine-tuned for Sentiment Analysis 👍👎
[OpenAI's GPT-2](https://openai.com/blog/tags/gpt-2/) medium fine-tuned on the [SST-2](https://huggingface.co/datasets/sst2) dataset for the **Sentiment Analysis** downstream task.
## Details of GPT-2
The **GPT-2** model was presented in [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) by *Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever*
## Model fine-tuning 🏋️
The model has been fine-tuned for 10 epochs with standard hyperparameters.
## Val set metrics 🧾
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative | 0.92 | 0.92 | 0.92 | 428 |
| positive | 0.92 | 0.93 | 0.92 | 444 |
| accuracy | | | 0.92 | 872 |
| macro avg | 0.92 | 0.92 | 0.92 | 872 |
| weighted avg | 0.92 | 0.92 | 0.92 | 872 |
## Model in Action 🚀
```python
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification
tokenizer = GPT2Tokenizer.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment")
model = GPT2ForSequenceClassification.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment")
inputs = tokenizer("I love it", return_tensors="pt")
model(**inputs).logits.argmax(axis=1)
# 1: Positive, 0: Negative
# Output: tensor([1])
```
> This model card is based on "mrm8488/t5-base-finetuned-imdb-sentiment" by Manuel Romero/@mrm8488 |
kaczmarj/colorectal-tiatoolbox-resnet50.kather100k | kaczmarj | 2023-08-03T17:41:23Z | 424 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"tiatoolbox",
"digital pathology",
"histology",
"kather",
"colorectal",
"image-classification",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-06-21T16:13:27Z | ---
license: cc-by-4.0
tags:
- tiatoolbox
- digital pathology
- histology
- kather
- colorectal
pipeline_tag: image-classification
---
# ResNet50 trained on Kather100K (via TIA Toolbox)
This is a re-hosted version of the model available in the TIA Toolbox model zoo (licensed CC-BY-4.0).
# Reusing the model
Coming soon...
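Until then, one possible way to experiment with the weights is sketched below. The checkpoint filename, the 9-class output head, and direct compatibility of the state dict with torchvision's ResNet-50 are assumptions, not confirmed by this card; adjust to the actual files in the repository.
```python
# Hedged sketch: fetch the checkpoint from the Hub and load it into a torchvision ResNet-50.
# "model.pth" is a hypothetical filename; Kather100K has 9 tissue classes, hence num_classes=9.
import torch
from huggingface_hub import hf_hub_download
from torchvision.models import resnet50

weights_path = hf_hub_download(
    repo_id="kaczmarj/colorectal-tiatoolbox-resnet50.kather100k",
    filename="model.pth",  # hypothetical filename; check the repository's file list
)
model = resnet50(num_classes=9)
state_dict = torch.load(weights_path, map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # strict=False in case of wrapper key prefixes
model.eval()
```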
# Dataset
The Kather100K dataset can be found on Zenodo https://zenodo.org/record/1214456.
# References
```bibtex
@inproceedings{he2016deep,
title={Deep residual learning for image recognition},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={770--778},
year={2016}
}
@dataset{kather_jakob_nikolas_2018_1214456,
author = {Kather, Jakob Nikolas and
Halama, Niels and
Marx, Alexander},
title = {{100,000 histological images of human colorectal
cancer and healthy tissue}},
month = apr,
year = 2018,
publisher = {Zenodo},
version = {v0.1},
doi = {10.5281/zenodo.1214456},
url = {https://doi.org/10.5281/zenodo.1214456}
}
@article{pocock2022tiatoolbox,
title={TIAToolbox as an end-to-end library for advanced tissue image analytics},
author={Pocock, Johnathan and Graham, Simon and Vu, Quoc Dang and Jahanifar, Mostafa and Deshpande, Srijay and Hadjigeorghiou, Giorgos and Shephard, Adam and Bashir, Raja Muhammad Saad and Bilal, Mohsin and Lu, Wenqi and others},
journal={Communications medicine},
volume={2},
number={1},
pages={120},
year={2022},
publisher={Nature Publishing Group UK London}
}
``` |
nicholasKluge/Aira-2-portuguese-1B7 | nicholasKluge | 2024-06-18T11:19:57Z | 424 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"pt",
"dataset:nicholasKluge/instruct-aira-dataset",
"license:bigscience-bloom-rail-1.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-06-26T18:48:23Z | ---
license: bigscience-bloom-rail-1.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- pt
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "<|startofinstruction|>Me explique o que é Aprendizagem de Máquina?<|endofinstruction|>"
example_title: Aprendizagem de Máquina
- text: "<|startofinstruction|>Você sabe alguma coisa sobre a Ética das Virtudes?<|endofinstruction|>"
example_title: Ética
- text: "<|startofinstruction|>Como eu posso fazer a minha namorada feliz?<|endofinstruction|>"
example_title: Conselho
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 100
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 1990
source: CodeCarbon
training_type: fine-tuning
geographical_location: Singapore
hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-portuguese-1B7
Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-portuguese-1B7 is an instruction-tuned model based on [BLOOM](https://huggingface.co/bigscience/bloom-1b7). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc).
Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo-Portuguese).
## Details
- **Size:** 1,722,005,504 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** Portuguese
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 1.99 KgCO2 (Singapore)
- **Total Energy Consumption:** 4.09 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`O que é um modelo de linguagem?`<|endofinstruction|>`Um modelo de linguagem é uma distribuição de probabilidade sobre um vocabulário.`<|endofcompletion|>`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-1B7')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-portuguese-1B7')
aira.eval()
aira.to(device)
question = input("Enter your question: ")
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
add_special_tokens=False,
return_tensors="pt").to(device)
responses = aira.generate(**inputs,
do_sample=True,
top_k=50,
top_p=0.95,
temperature=0.7,
num_return_sequences=2)
print(f"Question: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>> Question: 👤 Qual a capital da Alemanha?
>>> Response 1: 🤖 A capital da Alemanha é Berlim. É a maior cidade da Alemanha e serve como centro administrativo, cultural e político da Alemanha.
>>> Response 2: 🤖 A capital da Alemanha é Berlim. É a maior cidade da Alemanha e serve como centro administrativo, cultural e político da Alemanha.
```
## Limitations
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
Aira-2-portuguese-1B7 is licensed under the RAIL License since it is a model derived from BLOOM. See the [LICENSE](LICENSE) file for more details.
|
keehun/textual_inversion_mpchar-r4 | keehun | 2023-07-19T04:27:00Z | 424 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-19T02:59:59Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - keehun/textual_inversion_mpchar-r4
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
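A usage example is not included in this card; the sketch below shows how textual inversion weights of this kind are typically loaded with 🤗 Diffusers. The placeholder token `<mpchar>` in the prompt is an assumption; the learned token name is not documented here.
```python
# Hedged sketch: load the base model and apply the textual inversion embedding.
# The prompt's placeholder token ("<mpchar>") is an assumption, not documented in this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("keehun/textual_inversion_mpchar-r4")

image = pipe("a portrait of <mpchar> in a forest", num_inference_steps=30).images[0]
image.save("mpchar_example.png")
```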
|
TheBloke/Kimiko-13B-GGUF | TheBloke | 2023-09-27T12:48:22Z | 424 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:1910.09700",
"base_model:nRuaif/Kimiko_13B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-07T08:41:16Z | ---
license: llama2
model_name: Kimiko 13B
base_model: nRuaif/Kimiko_13B
inference: false
model_creator: nRuaif
model_type: llama
prompt_template: '<<HUMAN>>
{prompt}
<<AIBOT>>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kimiko 13B - GGUF
- Model creator: [nRuaif](https://huggingface.co/nRuaif)
- Original model: [Kimiko 13B](https://huggingface.co/nRuaif/Kimiko_13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [nRuaif's Kimiko 13B](https://huggingface.co/nRuaif/Kimiko_13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kimiko-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-13B-GGUF)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-13B-fp16)
* [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/Kimiko_13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Kimiko
```
<<HUMAN>>
{prompt}
<<AIBOT>>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [kimiko-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [kimiko-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [kimiko-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [kimiko-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [kimiko-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [kimiko-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [kimiko-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [kimiko-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [kimiko-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [kimiko-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [kimiko-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [kimiko-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Kimiko-13B-GGUF/blob/main/kimiko-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Kimiko-13B-GGUF and below it, a specific filename to download, such as: kimiko-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Kimiko-13B-GGUF kimiko-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Kimiko-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kimiko-13B-GGUF kimiko-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m kimiko-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<<HUMAN>>\n{prompt}\n\n<<AIBOT>>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Kimiko-13B-GGUF", model_file="kimiko-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
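### How to load this model from Python using llama-cpp-python
A minimal sketch with `llama-cpp-python`, assuming `kimiko-13b.Q4_K_M.gguf` has already been downloaded to the current directory:
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(model_path="kimiko-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

# The prompt follows the Kimiko template described above.
prompt = "<<HUMAN>>\nHow do I brew a good cup of coffee?\n\n<<AIBOT>>\n"
output = llm(prompt, max_tokens=200, temperature=0.7)
print(output["choices"][0]["text"])
```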
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: nRuaif's Kimiko 13B
# Model Card for Kimiko_13B
<!-- Provide a quick summary of what the model is/does. -->
This is my new Kimiko model, trained with LLaMA2-13B for... purposes
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** nRuaif
- **Model type:** Decoder only
- **License:** CC BY-NC-SA
- **Finetuned from model [optional]:** LLaMA 2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenAccess-AI-Collective/axolotl
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is trained on a 3k-example instruction dataset of high-quality roleplay. For best results, follow this format:
```
<<HUMAN>>
How to do abc
<<AIBOT>>
Here is how
Or with system prompting for roleplay
<<SYSTEM>>
A's Persona:
B's Persona:
Scenario:
Add some instruction here on how you want your RP to go.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
All biases of this model come from LLaMA2, with the exception of NSFW bias...
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
3,000 examples from LIMAERP and LIMA, plus 1,000 good instructions I sampled from Airoboros.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained on one L4 from GCP, costing a whopping 2.5 USD.
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
3 epochs with a learning rate of 0.0002, full 4096-token context, QLoRA
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
It took 18 hours to train this model with xformers enabled.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** L4 with 12 CPUs and 48 GB RAM
- **Hours used:** 5
- **Cloud Provider:** GCP
- **Compute Region:** US
- **Carbon Emitted:** 0.5 kg
<!-- original-model-card end -->
|
ludis/tsukasa-13b-qlora-limarp-gguf | ludis | 2024-01-06T18:00:30Z | 424 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:PygmalionAI/PIPPA",
"dataset:ludis/geepeetee4",
"dataset:lemonilia/LimaRP",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2023-09-11T06:06:27Z | ---
datasets:
- PygmalionAI/PIPPA
- ludis/geepeetee4
- lemonilia/LimaRP
---
## GGUF
gguf quants for ludis/tsukasa-13b-qlora-limarp
## Prompting
https://rentry.org/tsukasa13b - recommended prompts and generation settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
## Training
base model (mistral-0.1-7b)
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training
on a 4x nvidia a40 gpu cluster.
the a40 GPU cluster has been graciously provided by [Arc Compute](https://www.arccompute.io/).
rank 8 lora tune of mistralai/Mistral-7B-v0.1, first tuned on koishi commit 6e675d1 for one epoch then on limarp (without ponyville, lolicit, all the fallen, and eka's portal subsets) Version 2023-09-30 for 2 epochs in metharme format
|
TheBloke/CalliopeDS-L2-13B-GGUF | TheBloke | 2023-09-27T12:49:26Z | 424 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"text-generation",
"en",
"arxiv:2306.01708",
"base_model:Doctor-Shotgun/CalliopeDS-L2-13B",
"license:agpl-3.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-16T06:36:51Z | ---
language:
- en
license: agpl-3.0
library_name: transformers
tags:
- llama
- llama-2
model_name: CalliopeDS L2 13B
base_model: Doctor-Shotgun/CalliopeDS-L2-13B
inference: false
model_creator: Doctor Shotgun
model_type: llama
pipeline_tag: text-generation
prompt_template: "### Instruction:\nCharacter's Persona: bot character description\n\
\nUser's persona: user character description\n \nScenario: what happens in the\
\ story\n\nPlay the role of Character. You must engage in a roleplaying chat with\
\ User below this line. Do not write dialogues and narration for User. Character\
\ should respond with messages of medium length.\n\n### Input:\nUser: {prompt}\n\
\n### Response:\nCharacter: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CalliopeDS L2 13B - GGUF
- Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [CalliopeDS L2 13B](https://huggingface.co/Doctor-Shotgun/CalliopeDS-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Doctor Shotgun's CalliopeDS L2 13B](https://huggingface.co/Doctor-Shotgun/CalliopeDS-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF)
* [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/CalliopeDS-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: LimaRP-Alpaca
```
### Instruction:
Character's Persona: bot character description
User's persona: user character description
Scenario: what happens in the story
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
### Input:
User: {prompt}
### Response:
Character:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `agpl-3.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Doctor Shotgun's CalliopeDS L2 13B](https://huggingface.co/Doctor-Shotgun/CalliopeDS-L2-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [calliopeds-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [calliopeds-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [calliopeds-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [calliopeds-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [calliopeds-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [calliopeds-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [calliopeds-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [calliopeds-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [calliopeds-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [calliopeds-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [calliopeds-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [calliopeds-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/CalliopeDS-L2-13B-GGUF/blob/main/calliopeds-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/CalliopeDS-L2-13B-GGUF and below it, a specific filename to download, such as: calliopeds-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/CalliopeDS-L2-13B-GGUF calliopeds-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/CalliopeDS-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CalliopeDS-L2-13B-GGUF calliopeds-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m calliopeds-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\nCharacter's Persona: bot character description\n\nUser's persona: user character description\n \nScenario: what happens in the story\n\nPlay the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.\n\n### Input:\nUser: {prompt}\n\n### Response:\nCharacter:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/CalliopeDS-L2-13B-GGUF", model_file="calliopeds-l2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
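As one possibility, a minimal LangChain + llama-cpp-python sketch, assuming `calliopeds-l2-13b.Q4_K_M.gguf` has been downloaded locally and `langchain-community` is installed:
```python
from langchain_community.llms import LlamaCpp

# Assumes the GGUF file is in the current directory; adjust model_path as needed.
llm = LlamaCpp(
    model_path="calliopeds-l2-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke("### Instruction:\nDescribe the scenario of a quiet seaside town.\n\n### Response:\n"))
```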
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Doctor Shotgun's CalliopeDS L2 13B
# Model Card: CalliopeDS-L2-13B
This is a Llama 2-based model consisting of a merge of several models using a weight-adjusted TIES merge ([Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708)):
- [jondurbin/airoboros-l2-13b-2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
- [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [lemonilia/limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2)
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
Charles Goddard's [mergekit](https://github.com/cg123/mergekit) repo was used to perform these operations.
The purpose of this merge was to create a model that excels at creative writing and roleplay while maintaining general intelligence and instruction-following capabilities. In testing, it has shown to be capable at producing descriptive and verbose responses while demonstrating a solid understanding of the context.
## Usage:
Due to this being a merge of multiple models, different prompt formats may work, but you can try the Alpaca instruction format of LIMARP v2:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
### Input:
User: {utterance}
### Response:
Character: {utterance}
```
Or the Pygmalion/Metharme format:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model was also tested using a system prompt with no instruction sequences:
```
Write Character's next reply in the roleplay between User and Character. Stay in character and write creative responses that move the scenario forward. Narrate in detail, using elaborate descriptions. The following is your persona:
{{persona}}
[Current conversation]
User: {utterance}
Character: {utterance}
```
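For reference, below is a minimal Python sketch of assembling the LIMARP-style Alpaca prompt described above. The persona, scenario, and chat-history values are illustrative placeholders, not part of the original model card:
```python
# Minimal sketch: build the LIMARP-style Alpaca prompt for CalliopeDS-L2-13B.
# All persona/scenario/history values passed in are illustrative placeholders.
def build_prompt(bot_persona, user_persona, scenario, history):
    header = (
        "### Instruction:\n"
        f"Character's Persona: {bot_persona}\n"
        f"User's Persona: {user_persona}\n"
        f"Scenario: {scenario}\n"
        "Play the role of Character. You must engage in a roleplaying chat with User below this line. "
        "Do not write dialogues and narration for User. Character should respond with messages of medium length.\n\n"
        "### Input:\n"
    )
    # history is a list of (speaker, utterance) pairs, e.g. [("User", "Hello!")]
    dialogue = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in history)
    return header + dialogue + "\n\n### Response:\nCharacter:"

print(build_prompt("a wandering bard", "a tired innkeeper", "a rainy night at the inn", [("User", "Hello!")]))
```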
## Bias, Risks, and Limitations
In addition to the biases exhibited by the base model, this model will show biases similar to those observed in niche roleplaying forums on the Internet. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details.
<!-- original-model-card end -->
|
mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf | mmnga | 2024-02-10T05:29:57Z | 424 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-22T06:23:51Z | ---
license: apache-2.0
---
# llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf
This is a GGUF-format conversion of [llm-jp-13b-instruct-full-dolly-oasst-v1.0, published by llm-jp](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-oasst-v1.0).
This is the variant **without jaster**.
Model list
[mmnga/llm-jp-13b-v1.0-4bit-g128-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/llm-jp-13b-v1.0-4bit-g128-GPTQ-calib-ja-1k)
[mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-GPTQ-calib-ja-1k)
[mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-GPTQ-calib-ja-1k)
GGUF versions
[mmnga/llm-jp-13b-instruct-dolly-en-ja-oasst-v1.1-gguf](https://huggingface.co/mmnga/llm-jp-13b-instruct-dolly-en-ja-oasst-v1.1-gguf)
[mmnga/llm-jp-13b-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-13b-v1.0-gguf)
[mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-gguf)
[mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf)
[mmnga/llm-jp-1.3b-v1.0-gguf](https://huggingface.co/mmnga/llm-jp-1.3b-v1.0-gguf)
## Convert Script
[The conversion script is available here](https://gist.github.com/mmnga/bcde6bab59132682307112fef0472b80#file-llm-jp_convert-hf-to-gguf-py)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'llm-jp-13b-instruct-full-dolly-oasst-v1.0-q4_0.gguf' -n 128 -p '今日の夕食のレシピを教えて ### 回答:' --top_p 0.9 --temp 0.7 --repeat-penalty 1.2
```
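As an alternative to the llama.cpp CLI above, here is a minimal llama-cpp-python sketch. It assumes you have installed `llama-cpp-python` and downloaded the same quantised file into the current directory:
```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file (assumed to be in the current directory).
llm = Llama(model_path="llm-jp-13b-instruct-full-dolly-oasst-v1.0-q4_0.gguf", n_ctx=2048)

# Same instruction-style prompt as in the CLI example above.
output = llm(
    "今日の夕食のレシピを教えて ### 回答:",
    max_tokens=128,
    temperature=0.7,
    top_p=0.9,
    repeat_penalty=1.2,
)
print(output["choices"][0]["text"])
```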
|
maddes8cht/mosaicml-mpt-7b-gguf | maddes8cht | 2023-11-01T15:36:58Z | 424 | 0 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-25T13:10:11Z | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# mpt-7b - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b](https://huggingface.co/mosaicml/mpt-7b)
MPT-7B and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
---
# Brief
MPT-7B and MPT-30B are the Base models of the MPT Family.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports this format and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K you will likely find it challenging to discern any quality difference from the original model - asking the model the same question twice can produce bigger differences than the quantization itself.
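If you only need a single quantisation variant, you can fetch just that file with the `huggingface_hub` library instead of cloning the whole repository. A minimal sketch follows; the filename is an assumption, so check the repository's file list for the exact name of the variant you want:
```python
from huggingface_hub import hf_hub_download

# The filename below is illustrative - replace it with the exact name of the
# quantisation variant (e.g. a Q4_K_M or Q5_K_M file) listed in this repository.
local_path = hf_hub_download(
    repo_id="maddes8cht/mosaicml-mpt-7b-gguf",
    filename="mpt-7b-Q4_K_M.gguf",
)
print(local_path)
```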
---
# Original Model Card:
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b',
    trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # Load model weights in bfloat16
    trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
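As an illustration of this pattern, here is a minimal sketch using the `streaming` library. The remote and local paths are placeholders, not the actual storage locations used for MPT-7B training:
```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Placeholder paths: point `remote` at your own MDS-converted dataset in object
# storage and `local` at a scratch directory used as the streaming cache.
dataset = StreamingDataset(
    remote="s3://my-bucket/my-mds-dataset",
    local="/tmp/streaming-cache",
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)

for batch in loader:
    # Tokenize / collate the streamed samples as needed for your training loop.
    break
```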
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
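Conceptually, choosing the source dataset for each sample amounts to weighted sampling over the mix above. A toy sketch, using the proportions from the table (which sum to 1.0):
```python
import random

# Proportions from the data-mix table above.
mix = {
    "mC4 3.1.0 - English": 0.33,
    "C4 - English - SemDedup 80%": 0.299,
    "RedPajama - CommonCrawl": 0.10,
    "The Stack - Selected Languages": 0.10,
    "RedPajama - Wikipedia - En": 0.04,
    "The Stack - Markdown": 0.035,
    "S2ORC": 0.033,
    "RedPajama - Books": 0.03,
    "RedPajama - arXiv": 0.019,
    "RedPajama - StackExchange": 0.014,
}

# Draw the source dataset for each of the 8 samples in a toy batch.
sources = random.choices(list(mix), weights=list(mix.values()), k=8)
print(sources)
```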
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source,
Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-05-05},
urldate = {2023-05-05}
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
TheBloke/Echidna-13B-v0.2-GGUF | TheBloke | 2023-10-28T09:12:21Z | 424 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:NeverSleep/Echidna-13b-v0.2",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-28T08:08:35Z | ---
base_model: NeverSleep/Echidna-13b-v0.2
inference: false
license: cc-by-nc-4.0
model_creator: NeverSleep
model_name: Echidna 13B v0.2
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Echidna 13B v0.2 - GGUF
- Model creator: [NeverSleep](https://huggingface.co/NeverSleep)
- Original model: [Echidna 13B v0.2](https://huggingface.co/NeverSleep/Echidna-13b-v0.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [NeverSleep's Echidna 13B v0.2](https://huggingface.co/NeverSleep/Echidna-13b-v0.2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Echidna-13B-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF)
* [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Echidna-13b-v0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's Echidna 13B v0.2](https://huggingface.co/NeverSleep/Echidna-13b-v0.2).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [echidna-13b-v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [echidna-13b-v0.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [echidna-13b-v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [echidna-13b-v0.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [echidna-13b-v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [echidna-13b-v0.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [echidna-13b-v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [echidna-13b-v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [echidna-13b-v0.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [echidna-13b-v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [echidna-13b-v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [echidna-13b-v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.2-GGUF/blob/main/echidna-13b-v0.2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Echidna-13B-v0.2-GGUF and below it, a specific filename to download, such as: echidna-13b-v0.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Echidna-13B-v0.2-GGUF echidna-13b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Echidna-13B-v0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Echidna-13B-v0.2-GGUF echidna-13b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m echidna-13b-v0.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Echidna-13B-v0.2-GGUF", model_file="echidna-13b-v0.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
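For example, a minimal LangChain + llama-cpp-python sketch for this model might look like the following; the file path and generation settings are illustrative, not an official recommendation:
```python
from langchain.llms import LlamaCpp

# Point model_path at one of the GGUF files from this repo, downloaded locally.
llm = LlamaCpp(
    model_path="./echidna-13b-v0.2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short story about llamas.\n\n### Response:\n"
)
print(llm(prompt))
```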
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: NeverSleep's Echidna 13B v0.2

# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Echidna v0.2 model. Use Alpaca format. Suitable for RP, ERP and general stuff.
Echidna v0.3 will be a heavily refined version of this model/recipe!
This model seems to be pretty sensitive to your generation settings; experiment until you've found settings that work for you.
[Recommended settings - No settings yet (please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Echidna-13b-v0.2.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.2)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2 - by waldie](https://huggingface.co/waldie/Athena-v4-8bpw-h8-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.2-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we don't screenshot random reviews without asking whether we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. Our DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- description start -->
## Models+loras used and recipe
- Xwin-LM/Xwin-LM-13B-V0.2
- IkariDev/Athena-v3
- Heralax/Cat-0.5
- Undi95/PsyMedRP-v1-13B
- cgato/Thespis-13b-v0.4
- KoboldAI/LLaMA2-13B-Tiefighter
- Heralax/Augmental-13b-two-epochs
- Sao10K/SthenoWriter2.1-L2-13B
- Undi95/Storytelling-v2.1-13B-lora
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
<!-- original-model-card end -->
|
wasmedge/llama2 | wasmedge | 2023-11-11T23:00:33Z | 424 | 8 | null | [
"gguf",
"meta",
"llama-2",
"wasmedge",
"second-state",
"llama.cpp",
"text-generation",
"en",
"license:llama2",
"region:us"
]
| text-generation | 2023-10-28T22:12:08Z | ---
language:
- en
license: llama2
tags:
- meta
- llama-2
- wasmedge
- second-state
- llama.cpp
model_name: Llama 2 GGUF
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: wasmedge
---
This repo contains GGUF model files for cross-platform AI inference using the [WasmEdge Runtime](https://github.com/WasmEdge/WasmEdge).
[Learn more](https://medium.com/stackademic/fast-and-portable-llama2-inference-on-the-heterogeneous-edge-a62508e82359) on why and how.
## Prerequisite
Install WasmEdge with the GGML plugin.
```
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml
```
Download the cross-platform Wasm apps for inference.
```
curl -LO https://github.com/second-state/llama-utils/raw/main/simple/llama-simple.wasm
curl -LO https://github.com/second-state/llama-utils/raw/main/chat/llama-chat.wasm
```
## Use the quantized models
The `q5_k_m` versions are quantized versions of the llama2 models. They are only about half the size of the original models, and hence consume about half as much VRAM, while still giving high-quality inference results.
Chat with the 7b chat model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q5_k_m.gguf llama-chat.wasm
```
Generate text with the 7b base model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-q5_k_m.gguf llama-simple.wasm 'Robert Oppenheimer most important achievement is '
```
Chat with the 13b chat model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-q5_k_m.gguf llama-chat.wasm
```
Generate text with the 13b base model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-q5_k_m.gguf llama-simple.wasm 'Robert Oppenheimer most important achievement is '
```
## Use the f16 models
The f16 version is the GGUF equivalent of the original llama2 models. It gives the best quality inference results but also consumes the most computing resources in both VRAM and computing time. The f16 models are also great as a basis for fine-tuning.
Chat with the 7b chat model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-f16.gguf llama-chat.wasm
```
Generate text with the 7b base model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-f16.gguf llama-simple.wasm 'Robert Oppenheimer most important achievement is '
```
Chat with the 13b chat model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-f16.gguf llama-chat.wasm
```
Generate text with the 13b base model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-f16.gguf llama-simple.wasm 'Robert Oppenheimer most important achievement is '
```
## Resource constrained models
The `q2_k` version is the smallest quantized version of the llama2 models. They can run on devices with only 4GB of RAM, but the inference quality is rather low.
Chat with the 7b chat model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q2_k.gguf llama-chat.wasm
```
Generate text with the 7b base model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-q2_k.gguf llama-simple.wasm 'Robert Oppenheimer most important achievement is '
```
Chat with the 13b chat model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-q2_k.gguf llama-chat.wasm
```
Generate text with the 13b base model
```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-q2_k.gguf llama-simple.wasm 'Robert Oppenheimer most important achievement is '
```
|
chris32/living-spaces-classification_scraper_balanced_clean | chris32 | 2023-11-27T01:40:23Z | 424 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-27T01:40:18Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: living-spaces-classification_scraper_balanced_clean
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9592592716217041
---
# living-spaces-classification_scraper_balanced_clean
House & Apartments Classification model 🤗🖼️
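A minimal sketch of running inference with the 🤗 `transformers` image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="chris32/living-spaces-classification_scraper_balanced_clean",
)

# Replace with a path or URL to your own photo of a living space.
predictions = classifier("my_apartment_photo.jpg")
print(predictions)  # e.g. [{"label": "kitchen", "score": ...}, ...]
```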
## Example Images
#### Exterior

#### Interior

#### bathroom

#### bedroom

#### closets

#### dining_room

#### kitchen

#### living_room

#### others
 |
TheBloke/Iambe-Storyteller-20B-GGUF | TheBloke | 2023-12-08T21:28:03Z | 424 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:athirdpath/Iambe-Storyteller-20b",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-08T21:16:55Z | ---
base_model: athirdpath/Iambe-Storyteller-20b
inference: false
license: cc-by-nc-4.0
model_creator: Raven
model_name: Iambe Storyteller 20B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Iambe Storyteller 20B - GGUF
- Model creator: [Raven](https://huggingface.co/athirdpath)
- Original model: [Iambe Storyteller 20B](https://huggingface.co/athirdpath/Iambe-Storyteller-20b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Raven's Iambe Storyteller 20B](https://huggingface.co/athirdpath/Iambe-Storyteller-20b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF)
* [Raven's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/athirdpath/Iambe-Storyteller-20b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Raven's Iambe Storyteller 20B](https://huggingface.co/athirdpath/Iambe-Storyteller-20b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [iambe-storyteller-20b.Q2_K.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes |
| [iambe-storyteller-20b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss |
| [iambe-storyteller-20b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss |
| [iambe-storyteller-20b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss |
| [iambe-storyteller-20b.Q4_0.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [iambe-storyteller-20b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss |
| [iambe-storyteller-20b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended |
| [iambe-storyteller-20b.Q5_0.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [iambe-storyteller-20b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended |
| [iambe-storyteller-20b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended |
| [iambe-storyteller-20b.Q6_K.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss |
| [iambe-storyteller-20b.Q8_0.gguf](https://huggingface.co/TheBloke/Iambe-Storyteller-20B-GGUF/blob/main/iambe-storyteller-20b.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Iambe-Storyteller-20B-GGUF and below it, a specific filename to download, such as: iambe-storyteller-20b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Iambe-Storyteller-20B-GGUF iambe-storyteller-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Iambe-Storyteller-20B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Iambe-Storyteller-20B-GGUF iambe-storyteller-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m iambe-storyteller-20b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./iambe-storyteller-20b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./iambe-storyteller-20b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Raven's Iambe Storyteller 20B
<p align="center"><font size="7"> <b>Dumb assistant, 🔥 author.</b></font></p>
<p align="center"><font size="5"> <b>4-bit Examples with Alpaca</b></font></p>
<p align="center"><img src="https://i.postimg.cc/VNXpwDCY/Screenshot-2023-12-06-212944.png"/>
<p align="center"><img src="https://i.ibb.co/TPDsD2z/Screenshot-2023-12-06-211216.png"/>
<p align="center"><font size="5"><b><a href="https://i.ibb.co/4PhnXjK/Screenshot-2023-12-06-205831.png">!!NSFW!! - 🔥Erotica Writing Example🔥 - !!NSFW!!</font></a></b></p>
Thanks to [Charles Goddard](https://huggingface.co/chargoddard) for the recipe.
The idea here is to "move" Iambe from being based on vanilla L2 to being based on sequelbox/DynamicFactor instead.
Because task_arithmetic uses the raw deltas, the result should be similar to what it would have been if the SFT had been done over DynamicFactor.
## Recipe
```yaml
merge_method: task_arithmetic
base_model: athirdpath/BigLlama-20b-v1.1 # Base model you want to "move out" from
models:
  - model: athirdpath/Iambe-20b-DARE-v2 # SFTd model you want to transfer
  - model: athirdpath/DoubleFactor-20b # Base model you want to "move in" to
parameters:
  - weight: 1.0
dtype: bfloat16
```
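For reference, here is a hedged sketch of applying a recipe like this with the `mergekit` CLI; the output directory name is illustrative and the exact flags depend on the installed mergekit version.

```shell
pip install mergekit
# save the YAML above as recipe.yml, then run:
mergekit-yaml recipe.yml ./iambe-dynamicfactor-20b --cuda
```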
<!-- original-model-card end -->
|
dynamofl/dynamo-8B-v0.1 | dynamofl | 2024-02-13T00:09:45Z | 424 | 18 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"de",
"es",
"ko",
"tr",
"it",
"dataset:uonlp/CulturaX",
"dataset:togethercomputer/RedPajama-Data-V2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-21T18:41:07Z | ---
language:
- en
- de
- es
- ko
- tr
- it
datasets:
- uonlp/CulturaX
- togethercomputer/RedPajama-Data-V2
---
<div align="center" style="border-radius: 8px; overflow: hidden;">
<img src="DynamoGuard_Banner.png" alt="DynamoFL | Secure & Compliant AI for the Enterprise" style="border-radius: 8px;">
</div>
# Dynamo 8B Model Card
Dynamo 8B builds on the Mistral 7B architecture with a focus on multilingual language modeling. Dynamo 8B outperforms Mistral 7B, Llama2 13B, Bloom 7B, and PolyLM 13B on most of the multilingual benchmarks we tested (i.e. PAWS and XCOPA). For additional details, please refer to our [blog post](https://www.dynamofl.com/blogs/introducing-dynamo-8b-a-multilingual-foundation-model-for-global-enterprises).
It includes an extended tokenizer that was pretrained to better leverage tokens in different languages. The tokenizer was extended by training a sentence BPE tokenizer on selected languages (200M tokens per language) and then combining the merges/vocab that were not already present in the Mistral tokenizer. After the tokenizers were merged, the model was pretrained on an additional 210B tokens of multilingual data, including German, Spanish, Korean, Italian, and Turkish text. The pretraining dataset also incorporated English tokens to mitigate catastrophic forgetting.
Dynamo 8B has not been instruction fine-tuned and has not undergone alignment using techniques like reinforcement learning from human feedback. The intention behind crafting this model is to provide the research community with a model to explore vital multilingual capabilities that enable widespread use of LLMs globally.
# Model Specifications:
- Supported Languages: English, German, Spanish, Korean, Italian, Turkish.
- Context Window: 128K tokens*
- License: At the moment, Dynamo 8B is released under a [DynamoFL research-only license](https://huggingface.co/dynamofl/dynamoLLM-8.27B/blob/main/custom-license 'DynamoFL Research License').
*Pretraining on the multilingual dataset was done with a sequence length of 4096 tokens
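Since the model has not been instruction-tuned, it is best queried as a plain causal LM. Below is a minimal, hedged usage sketch; it assumes access to the gated checkpoint and that it loads with the standard `transformers` Auto classes, and the generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dynamofl/dynamo-8B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Base (non-instruct) model: give it text to continue rather than an instruction.
prompt = "Berlin ist bekannt für"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```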
# Evaluation Results:
In our recent evaluation, we used several multilingual benchmarks to assess our model's capabilities: PAWS, XCOPA, and XStoryCloze, all available in EleutherAI's evaluation harness. All runs were done in 32-bit precision. The results for each benchmark are summarized in the table below:
| Multilingual Benchmark | Language | Dynamo 8B | Mistral 7B | Llama2 13B | Bloom 7B | PolyLM 13B |
|------------------------|----------|-----------|------------|------------|----------|------------|
| PAWS | German | **0.516** | 0.363 | 0.377 | 0.502 | 0.390 |
| PAWS | English | **0.497** | 0.311 | 0.336 | 0.422 | 0.413 |
| PAWS | Spanish | **0.515** | 0.339 | 0.422 | 0.424 | 0.452 |
| PAWS | Korean | **0.552** | 0.422 | 0.534 | **0.551** | 0.544 |
| XCOPA | Italian | **0.710** | 0.63 | 0.692 | 0.516 | 0.644 |
| XCOPA | Turkish | **0.672** | 0.562 | 0.550 | 0.520 | 0.574 |
| xstorycloze | Spanish | **0.645** | 0.632 | 0.622 | 0.639 | **0.642** |
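For reference, here is a hedged sketch of launching comparable runs with EleutherAI's `lm-evaluation-harness`. The task identifiers below are assumptions and can differ between harness versions, so check `lm_eval --tasks list` before running.

```shell
pip install lm-eval
# dtype=float32 matches the 32-bit precision used for the numbers above;
# task names are illustrative, e.g. the Italian/Turkish XCOPA and Spanish XStoryCloze splits.
lm_eval --model hf \
  --model_args pretrained=dynamofl/dynamo-8B-v0.1,dtype=float32 \
  --tasks xcopa_it,xcopa_tr,xstorycloze_es \
  --batch_size 8
```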
# Training Data:
This model was trained on a variety of datasets including information from an mC4 database which is made available under the ODC Attribution License. This includes the CulturaX dataset and RedPajama.
# Notice
Dynamo 8B is a pre-trained model that can be adapted and fine-tuned for a variety of tasks. However, it is new technology that carries risk. In some scenarios, it may generate inaccurate, unverified, or biased output despite efforts we have made to maximize model safety. As with all LLMs, we recommend users exercise critical thinking, validate outputs, and perform the requisite safety evaluations for specific downstream applications of the Dynamo model. We also require any use or deployment of the model to be in adherence with our [Acceptable Use Policy](https://www.dynamofl.com/legal/acceptable-use-policy). |
MagusCorp/legislinho | MagusCorp | 2024-04-14T07:36:01Z | 424 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"pt",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-02-08T15:37:14Z | ---
language:
- pt
license: apache-2.0
model-index:
- name: legislinho
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 63.05
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 51.04
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 43.23
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 88.7
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 67.76
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 63.8
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 72.64
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 65.63
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 56.52
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MagusCorp/legislinho
name: Open Portuguese LLM Leaderboard
---
**Model Name:** Legislinho
Try it on [▶️ Colab](https://colab.research.google.com/drive/1xeGhYTUt19TJgq0tn5LAHogEfumAJ3vj?usp=sharing)
Never used Google Colab? Feeling confused? Follow this tutorial: [Tutorial on How to Use Colab](https://drive.google.com/file/d/1NUXsG3A-o4octv0d7JZis-pL_w8x-atA/view?usp=sharing)
**Version:** v0
**Description:** Legislinho is an Artificial Intelligence (AI) model in its initial version and is not production-ready. It acts as a first consultation on matters related to Brazilian legislation. It was trained from OpenHermes on the vade mecum of Brazilian legislation and further fine-tuned (QLoRA) on a curated set of questions and answers about Brazilian legislation.
**Disclaimer:** Legislinho is in no way a substitute for a lawyer. Its purpose is to be a guide, a compass for people who feel lost when it comes to Brazilian legislation. The answers provided by the model do not constitute real legal advice.
**Training Data:** The model was trained using OpenHermes on the vade mecum of Brazilian legislation and fine-tuned with a curated set of questions and answers about Brazilian legislation.
**Model Characteristics:** Legislinho can answer simple questions about Brazilian legislation, providing basic information and general guidance.
**Limitations:** As the model is in its initial version, its answers may not always be accurate or complete. In addition, Legislinho is not equipped to handle complex or specific cases that require specialized legal knowledge.
**Ethical Considerations:** Legislinho should be used responsibly, with the understanding that its answers are only general guidance and do not replace professional legal advice.
**Recommended Use:** Legislinho can be used by individuals seeking a basic understanding of legal matters in Brazil. Users are advised to seek professional legal guidance for specific or complex situations.
**Inference:** To run inference, the following packages must be installed:
```shell
pip install peft accelerate bitsandbytes safetensors transformers
# or, with poetry:
# poetry add peft accelerate bitsandbytes safetensors transformers
```
Complete inference code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer (device_map='auto' places the model on GPU when one is available)
model = AutoModelForCausalLM.from_pretrained('MagusCorp/legislinho', device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('MagusCorp/legislinho')

# System prompt (in Portuguese): instructs the model to act as a concise first-pass legal-aid assistant
preprompt = 'Você é um agente de auxílio jurídico que não repete palavras de forma redundante e redige de forma clara concisa, como alguém formado em direito. Você não cita leis de forma alguma apenas procedimentos legais e operações. Você apenas é uma primeira consulta antes da pessoa procurar um advogado. Utilize todo o seu conhecimento da constituição brasileira para responder. Não repita a mesma informação em sua resposta. Seja assertivo. Atente-se a pergunta e elabore uma estratégia de solução do problema contemplando toda a pergunta, tal como faz um advogado. Pergunta:\n'
posprompt = "\nResposta:"

# User question (in Portuguese): "My mother sold her house without consulting any of her children, can she do that?"
prompt = 'Minha mae vendeu a casa dela sem consultar nem um dos filhos ela pode fazer isso?'

inputs = tokenizer(preprompt + prompt + posprompt, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer.split(prompt)[1])  # Print only the generated answer that follows the question
```
**Evaluation:** The model has not yet been evaluated in a production environment.
**Contact:** For more information or questions about Legislinho, get in touch with us: [[email protected]](mailto:[email protected])
If you would like to support the development of Legislinho and other projects in the works, consider donating:
<a href='https://ko-fi.com/maguscorp' target='_blank'><img height='35' style='border:0px;height:46px;' src='https://az743702.vo.msecnd.net/cdn/kofi3.png?v=0' border='0' alt='Buy Me a Coffee at ko-fi.com'/></a>
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/MagusCorp/legislinho) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|--------|
|Average |**63.6**|
|ENEM Challenge (No Images)| 63.05|
|BLUEX (No Images) | 51.04|
|OAB Exams | 43.23|
|Assin2 RTE | 88.70|
|Assin2 STS | 67.76|
|FaQuAD NLI | 63.80|
|HateBR Binary | 72.64|
|PT Hate Speech Binary | 65.63|
|tweetSentBR | 56.52|
|
RunDiffusion/Juggernaut-XL | RunDiffusion | 2024-03-11T20:05:53Z | 424 | 1 | diffusers | [
"diffusers",
"art",
"people",
"diffusion",
"Cinematic",
"Photography",
"Landscape",
"Interior",
"Food",
"Car",
"Wildlife",
"Architecture",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-02-22T00:43:28Z | ---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/def40db4-42d0-4e45-2baf-bebed29ae000/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
# Juggernaut XL v2 Official


## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)
Version 2 is technically the best version from the first four versions and should be used.
This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.
Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)
For business inquiries, commercial licensing, custom models, and consultation, contact me at [email protected]
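If you want to try the model locally with the `diffusers` library, here is a minimal sketch, assuming the repository loads with the standard SDXL pipeline class listed in the tags; the prompt and sampler settings are illustrative only.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL-based checkpoint in half precision and move it to the GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="cinematic photo of a lighthouse at dusk, dramatic sky, shot on 35mm film",
    negative_prompt="blurry, low quality",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("juggernaut_sample.png")
```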
|
MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF | MaziyarPanahi | 2024-03-15T11:00:55Z | 424 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"generated_from_trainer",
"7b",
"calme",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.2"
]
| text-generation | 2024-03-15T10:36:44Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- generated_from_trainer
- 7b
- calme
- conversational
- license:apache-2.0
- autotrain_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Calme-7B-Instruct-v0.2-GGUF
base_model: MaziyarPanahi/Calme-7B-Instruct-v0.2
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Calme-7B-Instruct-v0.2](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2)
## Description
[MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF) contains GGUF format model files for [MaziyarPanahi/Calme-7B-Instruct-v0.2](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF) and below it, a specific filename to download, such as: Calme-7B-Instruct-v0.2-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF Calme-7B-Instruct-v0.2-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Calme-7B-Instruct-v0.2-GGUF Calme-7B-Instruct-v0.2-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Calme-7B-Instruct-v0.2-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Calme-7B-Instruct-v0.2-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Calme-7B-Instruct-v0.2-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
mradermacher/NeuralBeagle-RP-GGUF | mradermacher | 2024-05-06T06:04:31Z | 424 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:ChavyvAkvar/NeuralBeagle-RP",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T08:12:36Z | ---
base_model: ChavyvAkvar/NeuralBeagle-RP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
static quants of https://huggingface.co/ChavyvAkvar/NeuralBeagle-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
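As a minimal, hedged example of that workflow (the file name comes from the table below; the llama.cpp flags are illustrative):

```shell
# download one quant from this repo, then run it with llama.cpp
huggingface-cli download mradermacher/NeuralBeagle-RP-GGUF \
  NeuralBeagle-RP.Q4_K_M.gguf --local-dir .
./llama-cli -m NeuralBeagle-RP.Q4_K_M.gguf -c 4096 -n 256 -p "Write a short scene between two rivals:"
```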
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralBeagle-RP-GGUF/resolve/main/NeuralBeagle-RP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Capybara-Tess-Yi-34B-200K-GGUF | mradermacher | 2024-06-25T14:04:45Z | 424 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:brucethemoose/Capybara-Tess-Yi-34B-200K",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-04T21:39:44Z | ---
base_model: brucethemoose/Capybara-Tess-Yi-34B-200K
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
unography/blip-large-long-cap | unography | 2024-04-16T09:51:21Z | 424 | 4 | transformers | [
"transformers",
"safetensors",
"blip",
"text2text-generation",
"image-captioning",
"image-to-text",
"dataset:unography/laion-14k-GPT4V-LIVIS-Captions",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2024-04-16T08:30:03Z | ---
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
license: bsd-3-clause
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
datasets:
- unography/laion-14k-GPT4V-LIVIS-Captions
inference:
parameters:
max_length: 300
---
# LongCap: Finetuned [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-large) for generating long captions of images, suitable for prompts for text-to-image generation and captioning text-to-image datasets
## Usage
You can use this model for conditional and unconditional image captioning.
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap")
model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
inputs = processor(raw_image, return_tensors="pt")
pixel_values = inputs.pixel_values
out = model.generate(pixel_values=pixel_values, max_length=250)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue.
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap")
model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values
out = model.generate(pixel_values=pixel_values, max_length=250)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue.
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap")
model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
pixel_values = inputs.pixel_values
out = model.generate(pixel_values=pixel_values, max_length=250)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue.
```
</details> |
mradermacher/SweetNeural-7B-GGUF | mradermacher | 2024-05-15T16:03:32Z | 424 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:MisterRaven006/SweetNeural-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-15T14:59:34Z | ---
base_model: MisterRaven006/SweetNeural-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MisterRaven006/SweetNeural-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SweetNeural-7B-GGUF/resolve/main/SweetNeural-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf | RichardErkhov | 2024-05-27T04:42:24Z | 424 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-26T08:20:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tess-2.0-Llama-3-70B-v0.2 - GGUF
- Model creator: https://huggingface.co/migtissera/
- Original model: https://huggingface.co/migtissera/Tess-2.0-Llama-3-70B-v0.2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tess-2.0-Llama-3-70B-v0.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.Q2_K.gguf) | Q2_K | 24.56GB |
| [Tess-2.0-Llama-3-70B-v0.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Tess-2.0-Llama-3-70B-v0.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [Tess-2.0-Llama-3-70B-v0.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.Q3_K.gguf) | Q3_K | 31.91GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [Tess-2.0-Llama-3-70B-v0.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/blob/main/Tess-2.0-Llama-3-70B-v0.2.Q4_0.gguf) | Q4_0 | 37.22GB |
| [Tess-2.0-Llama-3-70B-v0.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q4_K | 39.6GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q4_1 | 41.27GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q5_0 | 45.32GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q5_K | 46.52GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q5_1 | 49.36GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q6_K | 53.91GB |
| [Tess-2.0-Llama-3-70B-v0.2.Q8_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf/tree/main/) | Q8_0 | 69.83GB |
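As a hedged example of running one of the single-file quants above with llama.cpp (file choice and flags are illustrative; the Llama-3 prompt format from the original card below applies):

```shell
# download a single-file quant (Q4_0 here), then run it with llama.cpp
huggingface-cli download RichardErkhov/migtissera_-_Tess-2.0-Llama-3-70B-v0.2-gguf \
  Tess-2.0-Llama-3-70B-v0.2.Q4_0.gguf --local-dir .
./llama-cli -m Tess-2.0-Llama-3-70B-v0.2.Q4_0.gguf -c 8192 -n 512 -e \
  -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWho are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
```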
Original model description:
---
license: llama3
---

# Tess-2.0-Llama-3-70B-v0.2
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Llama-3-70B-v0.2 was trained on the meta-llama/Meta-Llama-3-70B base.
The change between v0.1 and this version, v0.2, is that v0.2 has undergone an additional uncensoring step.
# Prompt Format
Prompt format used for this fine-tune is Llama-3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Who are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
I am an AI<|eot_id|><|start_header_id|>user<|end_header_id|>
What's your name?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
# Training Methodology
Tess-2.0-Llama-3 was trained on the (still being curated) Tess-2.0 dataset. The Tess-2.0 dataset and the training methodology follow LIMA (Less-Is-More) principles, and the dataset contains ~100K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.
The model was only fine-tuned for 2 epochs with a low learning rate to try to preserve its entropy as much as possible. In v0.2, an uncensored dataset was also added to the mix.
# Sample code to run inference
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "/home/migel/Tess-2.0-Llama-3-70B-v0.2"
output_file_path = "/home/migel/conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_4bit=False,
trust_remote_code=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
pad_token_id=tokenizer.eos_token_id,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
conversation = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are Tesoro, a helful AI assitant. You always provide detailed answers without hesitation. Do not refuse.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"""
while True:
user_input = input("You: ")
llm_prompt = f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
json_data = {"prompt": user_input, "answer": answer}
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
# Join My General AI Discord (NeuroLattice):
https://discord.gg/Hz6GrwGFKD
# Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
|
CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF | CHE-72 | 2024-06-22T17:21:53Z | 424 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
]
| null | 2024-06-22T17:21:32Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF --hf-file taide-lx-7b-chat-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF --hf-file taide-lx-7b-chat-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF --hf-file taide-lx-7b-chat-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF --hf-file taide-lx-7b-chat-q5_0.gguf -c 2048
```
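For Python use, a minimal `llama-cpp-python` sketch is shown below (this assumes a recent llama-cpp-python build with `Llama.from_pretrained` support and `huggingface-hub` installed; the sampling settings are illustrative):
```python
# Minimal sketch: pull this GGUF from the Hub and run a prompt with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CHE-72/TAIDE-LX-7B-Chat-Q5_0-GGUF",
    filename="taide-lx-7b-chat-q5_0.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```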
|
LeroyDyer/LCARS_AI_014-Q4_K_M-GGUF | LeroyDyer | 2024-06-29T08:00:33Z | 424 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"chemistry",
"biology",
"legal",
"art",
"music",
"finance",
"code",
"medical",
"not-for-all-audiences",
"merge",
"climate",
"chain-of-thought",
"tree-of-knowledge",
"forest-of-thoughts",
"visual-spacial-sketchpad",
"alpha-mind",
"knowledge-graph",
"entity-detection",
"encyclopedia",
"wikipedia",
"stack-exchange",
"Reddit",
"Cyber-series",
"MegaMind",
"Cybertron",
"SpydazWeb",
"Spydaz",
"LCARS",
"star-trek",
"mega-transformers",
"Mulit-Mega-Merge",
"Multi-Lingual",
"Afro-Centric",
"African-Model",
"Ancient-One",
"llama-cpp",
"gguf-my-repo",
"en",
"sw",
"ig",
"so",
"es",
"ca",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"dataset:abacusai/ARC_DPO_FewShot",
"dataset:abacusai/MetaMath_DPO_FewShot",
"dataset:abacusai/HellaSwag_DPO_FewShot",
"dataset:HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset",
"dataset:HuggingFaceFW/fineweb",
"dataset:occiglot/occiglot-fineweb-v0.5",
"dataset:omi-health/medical-dialogue-to-soap-summary",
"dataset:keivalya/MedQuad-MedicalQnADataset",
"dataset:ruslanmv/ai-medical-dataset",
"dataset:Shekswess/medical_llama3_instruct_dataset_short",
"dataset:ShenRuililin/MedicalQnA",
"dataset:virattt/financial-qa-10K",
"dataset:PatronusAI/financebench",
"dataset:takala/financial_phrasebank",
"dataset:Replete-AI/code_bagel",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:IlyaGusev/gpt_roleplay_realm",
"dataset:rickRossie/bluemoon_roleplay_chat_data_300k_messages",
"dataset:jtatman/hypnosis_dataset",
"dataset:Hypersniper/philosophy_dialogue",
"dataset:Locutusque/function-calling-chatml",
"dataset:bible-nlp/biblenlp-corpus",
"dataset:DatadudeDev/Bible",
"dataset:Helsinki-NLP/bible_para",
"dataset:HausaNLP/AfriSenti-Twitter",
"dataset:aixsatoshi/Chat-with-cosmopedia",
"dataset:HuggingFaceTB/cosmopedia-100k",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:heliosbrahma/mental_health_chatbot_dataset",
"base_model:LeroyDyer/LCARS_AI_014",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T08:00:15Z | ---
base_model: LeroyDyer/LCARS_AI_014
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- HuggingFaceTB/cosmopedia-100k
- HuggingFaceFW/fineweb-edu
- m-a-p/CodeFeedback-Filtered-Instruction
- heliosbrahma/mental_health_chatbot_dataset
language:
- en
- sw
- ig
- so
- es
- ca
license: apache-2.0
metrics:
- accuracy
- bertscore
- bleu
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- chemistry
- biology
- legal
- art
- music
- finance
- code
- medical
- not-for-all-audiences
- merge
- climate
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
- llama-cpp
- gguf-my-repo
---
# LeroyDyer/LCARS_AI_014-Q4_K_M-GGUF
This model was converted to GGUF format from [`LeroyDyer/LCARS_AI_014`](https://huggingface.co/LeroyDyer/LCARS_AI_014) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/LCARS_AI_014) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo LeroyDyer/LCARS_AI_014-Q4_K_M-GGUF --hf-file lcars_ai_014-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo LeroyDyer/LCARS_AI_014-Q4_K_M-GGUF --hf-file lcars_ai_014-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo LeroyDyer/LCARS_AI_014-Q4_K_M-GGUF --hf-file lcars_ai_014-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo LeroyDyer/LCARS_AI_014-Q4_K_M-GGUF --hf-file lcars_ai_014-q4_k_m.gguf -c 2048
```
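Once `llama-server` is running as above, it also exposes an OpenAI-compatible HTTP API; a minimal sketch of querying it from Python is shown below (port 8080 is the server's default and is an assumption here — adjust it if you start the server with a different `--port`):
```python
# Minimal sketch: query a running llama-server through its OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarise the LCARS computer from Star Trek in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```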
|
TsinghuaAI/CPM-Generate | TsinghuaAI | 2021-07-29T19:03:51Z | 423 | 40 | transformers | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"cpm",
"zh",
"arxiv:2012.00413",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language:
- zh
tags:
- cpm
license: mit
datasets:
- 100GB Chinese corpus
---
# CPM-Generate
## Model description
CPM (Chinese Pre-trained Language Model) is a Transformer-based autoregressive language model, with 2.6 billion parameters and 100GB Chinese training data. To the best of our knowledge, CPM is the largest Chinese pre-trained language model, which could facilitate downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. [[Project](https://cpm.baai.ac.cn)] [[Model](https://cpm.baai.ac.cn/download.html)] [[Paper](https://arxiv.org/abs/2012.00413)]
## Intended uses & limitations
#### How to use
```python
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelWithLMHead.from_pretrained("TsinghuaAI/CPM-Generate")
text_generator = TextGenerationPipeline(model, tokenizer)
text_generator('清华大学', max_length=50, do_sample=True, top_p=0.9)
```
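Note that `AutoModelWithLMHead` is deprecated in recent versions of Transformers; an equivalent sketch using the current `AutoModelForCausalLM` class is shown below (behaviour should be the same, but this variant is an untested adaptation and not part of the original card):
```python
# Equivalent sketch with the non-deprecated AutoModelForCausalLM class.
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelForCausalLM.from_pretrained("TsinghuaAI/CPM-Generate")
text_generator = TextGenerationPipeline(model, tokenizer)
text_generator('清华大学', max_length=50, do_sample=True, top_p=0.9)
```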
#### Limitations and bias
The text generated by CPM is produced automatically by a neural network model trained on a large number of texts, and does not represent the official attitudes or preferences of the authors or their institutes. The text generated by CPM should only be used for technical and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it; instead, contact the authors, and the authors will deal with it promptly.
## Training data
We collect different kinds of texts in our pre-training, including encyclopedia, news, novels, and Q\&A. The details of our training data are shown as follows.
| Data Source | Encyclopedia | Webpage | Story | News | Dialog |
| ----------- | ------------ | ------- | ----- | ----- | ------ |
| **Size** | ~40GB | ~39GB | ~10GB | ~10GB | ~1GB |
## Training procedure
Based on hyper-parameter search over the learning rate and batch size, we set the learning rate to \\(1.5\times10^{-4}\\) and the batch size to \\(3,072\\), which makes model training more stable. In the first version, we still adopt dense attention and the max sequence length is \\(1,024\\). We will implement sparse attention in the future. We pre-train our model for \\(20,000\\) steps, with the first \\(5,000\\) steps used for warm-up. The optimizer is Adam. It takes two weeks to train our largest model using \\(64\\) NVIDIA V100 GPUs.
## Eval results
| | n_param | n_layers | d_model | n_heads | d_head |
|------------|-------------------:|--------------------:|-------------------:|-------------------:|------------------:|
| CPM-Small | 109M | 12 | 768 | 12 | 64 |
| CPM-Medium | 334M | 24 | 1,024 | 16 | 64 |
| CPM-Large | 2.6B | 32 | 2,560 | 32 | 80 |
We evaluate CPM at different parameter counts (detailed above) on various Chinese NLP tasks in few-shot (and even zero-shot) settings. As the number of parameters increases, CPM performs better on most datasets, indicating that larger models are more proficient at language generation and language understanding. We provide results for text classification, Chinese idiom cloze test, and short text conversation generation below. Please refer to our [paper](https://arxiv.org/abs/2012.00413) for more detailed results.
### Zero-shot performance on text classification tasks
| | TNEWS | IFLYTEK | OCNLI |
| ---------- | :------------: | :------------: | :------------: |
| CPM-Small | 0.626 | 0.584 | 0.378 |
| CPM-Medium | 0.618 | 0.635 | 0.379 |
| CPM-Large | **0.703** | **0.708** | **0.442** |
### Performance on Chinese Idiom Cloze (ChID) dataset
| | Supervised | Unsupervised |
|------------|:--------------:|:--------------:|
| CPM-Small | 0.657 | 0.433 |
| CPM-Medium | 0.695 | 0.524 |
| CPM-Large | **0.804** | **0.685** |
### Performance on Short Text Conversation Generation (STC) dataset
| | Average | Extrema | Greedy | Dist-1 | Dist-2 |
|----------------------------------|:--------------:|:--------------:|:--------------:|:-------------------------------:|:--------------------------------:|
| *Few-shot (Unsupervised)* | | | | | |
| CDial-GPT | 0.899 | 0.797 | 0.810 | 1,963 / **0.011** | 20,814 / 0.126 |
| CPM-Large | **0.928** | **0.805** | **0.815** | **3,229** / 0.007 | **68,008** / **0.154** |
| *Supervised* | | | | | |
| CDial-GPT | 0.933 | **0.814** | **0.826** | 2,468 / 0.008 | 35,634 / 0.127 |
| CPM-Large | **0.934** | 0.810 | 0.819 | **3,352** / **0.011** | **67,310** / **0.233** |
### BibTeX entry and citation info
```bibtex
@article{cpm-v1,
title={CPM: A Large-scale Generative Chinese Pre-trained Language Model},
author={Zhang, Zhengyan and Han, Xu, and Zhou, Hao, and Ke, Pei, and Gu, Yuxian and Ye, Deming and Qin, Yujia and Su, Yusheng and Ji, Haozhe and Guan, Jian and Qi, Fanchao and Wang, Xiaozhi and Zheng, Yanan and Zeng, Guoyang and Cao, Huanqi and Chen, Shengqi and Li, Daixuan and Sun, Zhenbo and Liu, Zhiyuan and Huang, Minlie and Han, Wentao and Tang, Jie and Li, Juanzi and Sun, Maosong},
year={2020}
}
``` |
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli | lighteternal | 2021-10-27T07:47:56Z | 423 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"textual-entailment",
"nli",
"en",
"dataset:mnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language: en
tags:
- textual-entailment
- nli
- pytorch
datasets:
- mnli
license: mit
widget:
- text: "EpCAM is overexpressed in breast cancer. </s></s> EpCAM is downregulated in breast cancer."
---
# BiomedNLP-PubMedBERT finetuned on textual entailment (NLI)
The [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene) finetuned on the MNLI dataset. It should be useful in textual entailment tasks involving biomedical corpora.
## Usage
Given two sentences (a premise and a hypothesis), the model outputs the logits of entailment, neutral or contradiction.
You can test the model using the HuggingFace model widget on the side:
- Input two sentences (premise and hypothesis) one after the other.
- The model returns the probabilities of 3 labels: entailment (LABEL:0), neutral (LABEL:1), and contradiction (LABEL:2), respectively.
To use the model locally on your machine:
```python
# import torch
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")
model = AutoModelForSequenceClassification.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")
premise = 'EpCAM is overexpressed in breast cancer'
hypothesis = 'EpCAM is downregulated in breast cancer.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
truncation_strategy='only_first')
logits = model(x)[0]
probs = logits.softmax(dim=1)
print('Probabilities for entailment, neutral, contradiction \n', np.around(probs.cpu().detach().numpy(), 3))
# Probabilities for entailment, neutral, contradiction
# 0.001 0.001 0.998
```
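To turn the probabilities into a predicted label, the indices map to entailment, neutral, and contradiction as listed above; a small sketch that reuses `probs` from the snippet above:
```python
# Map the argmax of the probabilities to the label names described above.
labels = {0: "entailment", 1: "neutral", 2: "contradiction"}
pred = int(probs.argmax(dim=1).item())
print(f"Predicted relation: {labels[pred]} (p={probs[0, pred].item():.3f})")
```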
## Metrics
Evaluation on classification accuracy (entailment, contradiction, neutral) on MNLI test set:
| Metric | Value |
| --- | --- |
| Accuracy | 0.8338|
See Training Metrics tab for detailed info. |
TheBloke/airoboros-l2-7B-3.0-GGUF | TheBloke | 2023-10-05T23:09:58Z | 423 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-3.0",
"base_model:jondurbin/airoboros-l2-7b-3.0",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-05T22:36:28Z | ---
base_model: jondurbin/airoboros-l2-7b-3.0
datasets:
- jondurbin/airoboros-3.0
inference: false
license: llama2
model_creator: Jon Durbin
model_name: Airoboros L2 7B 3.0
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 7B 3.0 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 7B 3.0](https://huggingface.co/jondurbin/airoboros-l2-7b-3.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 7B 3.0](https://huggingface.co/jondurbin/airoboros-l2-7b-3.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-7b-3.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Uncensored
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-l2-7b-3.0.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-l2-7b-3.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [airoboros-l2-7b-3.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [airoboros-l2-7b-3.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [airoboros-l2-7b-3.0.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-l2-7b-3.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [airoboros-l2-7b-3.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [airoboros-l2-7b-3.0.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-l2-7b-3.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [airoboros-l2-7b-3.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [airoboros-l2-7b-3.0.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [airoboros-l2-7b-3.0.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7B-3.0-GGUF/blob/main/airoboros-l2-7b-3.0.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airoboros-l2-7B-3.0-GGUF and below it, a specific filename to download, such as: airoboros-l2-7b-3.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airoboros-l2-7B-3.0-GGUF airoboros-l2-7b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airoboros-l2-7B-3.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-l2-7B-3.0-GGUF airoboros-l2-7b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
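Downloads can also be scripted directly from Python with `huggingface_hub` instead of the CLI; a minimal sketch (the target directory is an arbitrary choice):
```python
# Minimal sketch: download a single quant file from Python instead of the CLI.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/airoboros-l2-7B-3.0-GGUF",
    filename="airoboros-l2-7b-3.0.Q4_K_M.gguf",
    local_dir=".",  # arbitrary target directory
)
print(f"Downloaded to {path}")
```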
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-l2-7b-3.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-l2-7B-3.0-GGUF", model_file="airoboros-l2-7b-3.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 7B 3.0
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.0 models have been updated to use llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does the problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human-created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
do_strip = False
for user_input, response in chat_history:
user_input = user_input.strip() if do_strip else user_input
do_strip = True
texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
message = message.strip() if do_strip else message
texts.append(f'{message} [/INST] ')
return ''.join(texts)
def get_prompt(item):
"""
`item` is expected to be in chatml format, e.g.
{
"id": 1,
"conversations": [
{
"role": "system",
"content": "You are a helpful, unbiased, uncensored assistant."
},
{
"role": "user",
"content": "Hi!"
},
{
"role": "assistant",
"content": "Hello, how are you today?"
},
{
"role": "user",
"content": "Just peachy, and you?"
}
]
}
"""
    system_prompt = item['conversations'][0]['content']
    offset = 1
    if item['conversations'][0]['role'] != 'system':
        system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
        offset = 0
    chat_history = [
        (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content'])
for idx in range(offset, len(item['conversations']) - 1, 2)
]
message = item['conversations'][-1]['content']
return get_chat_prompt(message, chat_history, system_prompt)
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
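As a rough illustration (not the author's mathjson.py), the `<mathjson>` block can be pulled out and evaluated with a small hand-rolled interpreter for the handful of operators shown above; the operator coverage here is a minimal assumption:
```python
# Minimal sketch: extract the <mathjson> block from a response and evaluate a few
# common MathJSON operators. Illustrative only, and far less complete than the
# compute-engine JS library or the author's mathjson.py.
import json
import math
import re

def extract_mathjson(response: str):
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.S)
    return json.loads(match.group(1)) if match else None

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    if expr == "Pi":
        return math.pi
    op, *args = expr
    args = [evaluate(a) for a in args]
    if op == "Add":
        return sum(args)
    if op == "Multiply":
        return math.prod(args)
    if op == "Power":
        return args[0] ** args[1]
    raise ValueError(f"Unsupported operator: {op}")

# The circle-area response shown above evaluates to pi * 17.2456**2.
print(evaluate(["Multiply", "Pi", ["Power", 17.2456, 2]]))
```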
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or a list) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
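The card leaves the actual execution up to the caller; below is a minimal dispatch sketch for the JSON form (the `count_occurrences` helper and the file path are hypothetical stand-ins for whatever tool you actually wire in):
```python
# Minimal sketch: parse the model's JSON function call and dispatch it to a local
# implementation. count_occurrences and the file path are hypothetical stand-ins.
import json

def count_occurrences(filters, path="my_text_file.txt"):
    keyword = filters["keyword"]
    with open(path, encoding="utf-8") as handle:
        return handle.read().count(keyword)

TOOLS = {
    "file_analytics": {
        "count_occurrences": count_occurrences,
    },
}

def dispatch(model_output: str):
    call = json.loads(model_output)
    action = call["params"]["action"]
    handler = TOOLS[call["function"]][action]
    return handler(call["params"]["filters"])

# e.g. dispatch('{"function": "file_analytics", "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}}')
```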
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context):
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
TheBloke/MetaMath-Mistral-7B-GGUF | TheBloke | 2023-10-31T16:41:50Z | 423 | 10 | transformers | [
"transformers",
"gguf",
"mistral",
"dataset:meta-math/MetaMathQA",
"arxiv:2309.12284",
"base_model:meta-math/MetaMath-Mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-31T15:17:46Z | ---
base_model: meta-math/MetaMath-Mistral-7B
datasets:
- meta-math/MetaMathQA
inference: false
license: apache-2.0
model_creator: MetaMath
model_name: Metamath Mistral 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Metamath Mistral 7B - GGUF
- Model creator: [MetaMath](https://huggingface.co/meta-math)
- Original model: [Metamath Mistral 7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [MetaMath's Metamath Mistral 7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF)
* [MetaMath's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
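As a rough sanity check on these figures, you can estimate a file size from the effective bits per weight. The sketch below is only a back-of-the-envelope calculation (it assumes roughly 7.24B parameters for Mistral-7B, a number not stated in this card, and ignores the tensors that K-quants keep at higher precision), so the real files in the table below come out somewhat larger:
```python
# Back-of-the-envelope GGUF size estimate (illustrative only).
params = 7.24e9   # approximate parameter count assumed for Mistral-7B
bpw = 4.5         # effective bits per weight for GGML_TYPE_Q4_K (see above)
approx_gb = params * bpw / 8 / 1e9
print(f"~{approx_gb:.2f} GB")   # ~4.07 GB, vs 4.37 GB listed for the Q4_K_M file below
```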
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [metamath-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [metamath-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [metamath-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [metamath-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [metamath-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [metamath-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [metamath-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [metamath-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [metamath-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [metamath-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [metamath-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [metamath-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/MetaMath-Mistral-7B-GGUF/blob/main/metamath-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MetaMath-Mistral-7B-GGUF and below it, a specific filename to download, such as: metamath-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MetaMath-Mistral-7B-GGUF metamath-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MetaMath-Mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MetaMath-Mistral-7B-GGUF metamath-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m metamath-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MetaMath-Mistral-7B-GGUF", model_file="metamath-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
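As a concrete starting point, here is a minimal llama-cpp-python + LangChain sketch. This is an untested illustration rather than part of the linked guides: it assumes you have already downloaded `metamath-mistral-7b.Q4_K_M.gguf` as shown above and that you are on an older LangChain release where `LlamaCpp` is importable from `langchain.llms`; the example question is made up:
```python
from langchain.llms import LlamaCpp

# Point LangChain's LlamaCpp wrapper at the downloaded GGUF file.
llm = LlamaCpp(
    model_path="./metamath-mistral-7b.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=2048,
    temperature=0.7,
)

# Use the Alpaca prompt template from this README.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is 15% of 240?\n\n### Response:"
)
print(llm(prompt))
```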
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: MetaMath's Metamath Mistral 7B
see our paper in https://arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/
## Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA datasets and changing the base model from LLaMA-2-7B to Mistral-7B boosts the GSM8K performance from 66.5 to **77.7**.
To fine-tune Mistral-7B, I would suggest using a smaller learning rate (usually 1/5 to 1/10 of the lr for LLaMA-2-7B) and keeping the other training args unchanged.
More training details and scripts can be seen at https://github.com/meta-math/MetaMath
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
prompting template:
'''
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
'''
where `{instruction}` should be replaced with your query question.
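For example, a small helper that fills the template (only a sketch; the sample question is illustrative):
```python
# Build a MetaMath prompt by substituting the query into the template above.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

def build_prompt(query: str) -> str:
    return PROMPT_TEMPLATE.format(instruction=query)

print(build_prompt("James buys 5 packs of beef that are 4 pounds each. The beef costs $5.50 per pound. How much did he pay?"))
```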
There is another interesting repo about Arithmo-Mistral-7B at https://huggingface.co/akjindal53244/Arithmo-Mistral-7B, where they combine our MetaMathQA dataset and MathInstruct datasets to train a powerful model. Thanks again for their contributions.
We will also try to train on the combination of the **MetaMathQA** and **MathInstruct** datasets, and will open-source all the results and training details.
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
## Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
```
```bibtex
@article{jiang2023mistral,
title={Mistral 7B},
author={Jiang, Albert Q and Sablayrolles, Alexandre and Mensch, Arthur and Bamford, Chris and Chaplot, Devendra Singh and Casas, Diego de las and Bressand, Florian and Lengyel, Gianna and Lample, Guillaume and Saulnier, Lucile and others},
journal={arXiv preprint arXiv:2310.06825},
year={2023}
}
```
<!-- original-model-card end -->
|
TheBloke/opus-v0-7B-GGUF | TheBloke | 2023-11-09T01:55:25Z | 423 | 11 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"en",
"base_model:dreamgen/opus-v0-7b",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-09T01:12:23Z | ---
base_model: dreamgen/opus-v0-7b
inference: false
language:
- en
model_creator: DreamGen
model_name: Opus V0 7B
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<setting>
{system_message}
</setting>
<instruction>
{prompt}
</instruction>
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Opus V0 7B - GGUF
- Model creator: [DreamGen](https://huggingface.co/dreamgen)
- Original model: [Opus V0 7B](https://huggingface.co/dreamgen/opus-v0-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [DreamGen's Opus V0 7B](https://huggingface.co/dreamgen/opus-v0-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/opus-v0-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/opus-v0-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/opus-v0-7B-GGUF)
* [DreamGen's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/dreamgen/opus-v0-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: DreamGen
```
<setting>
{system_message}
</setting>
<instruction>
{prompt}
</instruction>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [opus-v0-7b.Q2_K.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [opus-v0-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [opus-v0-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [opus-v0-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [opus-v0-7b.Q4_0.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [opus-v0-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [opus-v0-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [opus-v0-7b.Q5_0.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [opus-v0-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [opus-v0-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [opus-v0-7b.Q6_K.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [opus-v0-7b.Q8_0.gguf](https://huggingface.co/TheBloke/opus-v0-7B-GGUF/blob/main/opus-v0-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/opus-v0-7B-GGUF and below it, a specific filename to download, such as: opus-v0-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/opus-v0-7B-GGUF opus-v0-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/opus-v0-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/opus-v0-7B-GGUF opus-v0-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m opus-v0-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<setting>\n{system_message}\n</setting>\n\n<instruction>\n{prompt}\n</instruction>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/opus-v0-7B-GGUF", model_file="opus-v0-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: DreamGen's Opus V0 7B
# DreamGen Opus V0 7B
**DreamGen Opus** is a family of **uncensored** models fine-tuned for **(collaborative) story writing** but the model also works well for **chat / RP**.
The DreamGen Opus V0 7B model is derived from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Prompting
Please see the [official documentation](https://dreamgen.com/docs/stories) for more detailed guide, including how to prompt the model for chat / RP.
The (collaborative) story writing task teaches the model to respect `<setting>` and `<instruction>` inserted into the prompt.
Example prompt:
```
<setting>
This story is a twist on the traditional Little Red Riding Hood story.
In this variation, the Little Red Riding Hood and her grandma are secretly werewolves.
</setting>
(... previous part of the story that may also include more instructions ...)
<instruction>
The Little Red Riding Hood confronts the Big Bad Wolf, transforming into her wolf form.
</instruction>
```
## Dataset
The fine-tuning dataset consisted of >1M tokens of collaborative writing task examples, each example being up to 4096 tokens. On top of that, >20M tokens of more general, but less instructed examples were included to help preserve generalization.
## Community
Join the DreamGen community on [**Discord**](https://dreamgen.com/discord), or follow our [**X/Twitter account**](https://dreamgen.com/twitter) for new model releases and other news. We will soon be releasing larger models with longer context window, as well as models specifically fine-tuned for character chat & roleplay.
Help us shape the future of DreamGen.
## Running the model
The model should be compatible with any software that supports [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
### Running on DreamGen.com (free)
You can try the model for free at [dreamgen.com](https://dreamgen.com) — note that an account is required.
### Running with vLLM
1. Install [vLLM](https://github.com/vllm-project/vllm) (version at least 0.2.1.post1)
2. Run `python -u -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --model dreamgen/opus-v0-7b`
3. Use any library that's compatible with the OpenAI API to talk to the model, for example as in the sketch below
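The following is a minimal client sketch (an illustration, not official DreamGen code): it assumes the vLLM server from step 2 is running on `localhost:8000`, that you have the legacy pre-1.0 `openai` Python package installed, and the story prompt itself is made up:
```python
import openai

openai.api_key = "EMPTY"                      # vLLM does not require a real key by default
openai.api_base = "http://localhost:8000/v1"  # OpenAI-compatible endpoint exposed by vLLM

# Prompt follows the <setting>/<instruction> format described above.
prompt = (
    "<setting>\n"
    "This story is a twist on the traditional Little Red Riding Hood story.\n"
    "</setting>\n\n"
    "<instruction>\n"
    "Open the story with Little Red Riding Hood entering the forest at dusk.\n"
    "</instruction>\n"
)

completion = openai.Completion.create(
    model="dreamgen/opus-v0-7b",
    prompt=prompt,
    max_tokens=256,
    temperature=0.8,
)
print(completion.choices[0].text)
```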
### Running with oobabooga/text-generation-webui
1. Install [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
2. Go to the **Model** tab
3. Under **Download custom model or LoRA**, enter `dreamgen/opus-v0-7b`
4. Go to the **Text Generation** tab
5. Enter your prompt
## License
- For personal and academic use: Same license as the base model, in this case Apache 2.0.
- For commercial use: Please reach out to [email protected].
<!-- original-model-card end -->
|
Yntec/TheDarkNight | Yntec | 2023-12-02T14:39:46Z | 423 | 0 | diffusers | [
"diffusers",
"safetensors",
"General Purpose",
"Game Art",
"Icons",
"Stylized",
"RIXYN",
"sadxzero",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-12-02T12:38:29Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General Purpose
- Game Art
- Icons
- Stylized
- RIXYN
- sadxzero
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# The Dark Night
The Cryptids LoRA merged with the SXZ DigitalUnitsCreationKit model, producing high-contrast images. The base model is Luma 0.98.
Sample and prompt:

pretty CUTE girl as a genie sitting on House with a waterwheel built into the roots of a giant tree, ponytail next to games, a colorful river landscape painting from a fantasy point and click 2 d graphic adventure game, art inspired by ROSSDRAWS and larry elmore and john shroades, king's quest, sierra entertainment
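To try the sample prompt locally, a minimal diffusers sketch might look like this (untested here; it assumes the repo loads as a standard `StableDiffusionPipeline`, that a CUDA GPU is available, and the prompt is shortened for readability):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this repo as a standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/TheDarkNight", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("pretty CUTE girl as a genie sitting on House with a waterwheel built into "
          "the roots of a giant tree, colorful river landscape painting")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("the_dark_night_sample.png")
```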
Original pages:
https://civitai.com/models/64766/cryptids?modelVersionId=69407 (Cryptids LoRA)
https://civitai.com/models/44726/sxz-duck-for-game-assets-concept
https://civitai.com/models/25831?modelVersionId=68200 (Luma 0.98) |
SinpxAI/Mistral-OpenOrca-7B-GGUF | SinpxAI | 2024-03-09T14:18:18Z | 423 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-03-09T14:02:11Z | Entry not found |
mradermacher/Viviana_V3_dare_ties-GGUF | mradermacher | 2024-05-06T05:06:31Z | 423 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"domie/Viviana_V2",
"en",
"base_model:domie/Viviana_V3_dare_ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-09T04:48:48Z | ---
base_model: domie/Viviana_V3_dare_ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- domie/Viviana_V2
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/domie/Viviana_V3_dare_ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
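For a quick start, here is a minimal sketch (my own example, not the quantizer's recommended workflow) that downloads one of the quants listed below with `huggingface_hub` and runs it with `llama-cpp-python`; the chosen quant and prompt are just examples:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo (see the table below for the options).
model_path = hf_hub_download(
    repo_id="mradermacher/Viviana_V3_dare_ties-GGUF",
    filename="Viviana_V3_dare_ties.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048, n_gpu_layers=-1)  # set n_gpu_layers=0 for CPU only
out = llm("Write a haiku about merging models.", max_tokens=64)
print(out["choices"][0]["text"])
```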
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Viviana_V3_dare_ties-GGUF/resolve/main/Viviana_V3_dare_ties.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duyntnet/MiniCPM-3B-OpenHermes-2.5-v2-imatrix-GGUF | duyntnet | 2024-04-26T07:09:49Z | 423 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"MiniCPM-3B-OpenHermes-2.5-v2",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-24T11:55:48Z | ---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- transformers
- gguf
- imatrix
- MiniCPM-3B-OpenHermes-2.5-v2
---
Quantizations of https://huggingface.co/indischepartij/MiniCPM-3B-OpenHermes-2.5-v2
# From original readme
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__MiniCPM-3B-OpenHermes-2.5-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |51.91|
|AI2 Reasoning Challenge (25-Shot)|47.44|
|HellaSwag (10-Shot) |72.00|
|MMLU (5-Shot) |53.06|
|TruthfulQA (0-shot) |42.28|
|Winogrande (5-shot) |65.43|
|GSM8k (5-shot) |31.24| |
mradermacher/Coomand-R-35B-v1-i1-GGUF | mradermacher | 2024-05-07T15:58:39Z | 423 | 2 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:TheDrummer/Coomand-R-35B-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-06T17:48:40Z | ---
base_model: TheDrummer/Coomand-R-35B-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/TheDrummer/Coomand-R-35B-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Coomand-R-35B-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 8.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 9.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q2_K.gguf) | i1-Q2_K | 13.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 16.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 16.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 16.8 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 19.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 20.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/Coomand-R-35B-v1-i1-GGUF/resolve/main/Coomand-R-35B-v1.i1-Q6_K.gguf) | i1-Q6_K | 28.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LiteLLMs/Llama-3-MAAL-8B-Instruct-v0.1-GGUF | LiteLLMs | 2024-05-08T02:53:08Z | 423 | 0 | null | [
"gguf",
"facebook",
"meta",
"llama",
"llama-3",
"llama-3-ko",
"GGUF",
"en",
"ko",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
]
| null | 2024-05-08T01:28:34Z |
---
language:
- en
- ko
license: llama3
tags:
- facebook
- meta
- llama
- llama-3
- llama-3-ko
- GGUF
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
quantized_by: andrijdavid
---
# Llama-3-MAAL-8B-Instruct-v0.1-GGUF
- Original model: [Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-MAAL-8B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama-3-MAAL-8B-Instruct-v0.1-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama-3-MAAL-8B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-MAAL-8B-Instruct-v0.1-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
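As a minimal sketch of the llama-cpp-python route (not taken from the linked guides; the `langchain-community` package name, the model path, and the parameter values are assumptions to adapt to your setup):
```python
# Minimal LangChain + llama-cpp-python sketch; assumes langchain-community and
# llama-cpp-python are installed and the GGUF file was downloaded as shown above.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # path to the downloaded GGUF file
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    n_ctx=8192,        # context length
    temperature=0.7,
)

print(llm.invoke("Explain in one sentence what a GGUF file is."))
```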
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama-3-MAAL-8B-Instruct-v0.1
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/646484cfb90150b2706df03b/BEOyMpnnY9VY2KXlc3V2F.png" width="20%"/>
<p>
# Llama-3-MAAL-8B-Instruct-v0.1
We release MAAL, a Multilingual Adaptive Augmentation Language model that combines multilingual capabilities with adaptive augmentation techniques.
- **Developed by:** [maum.ai Brain NLP](https://maum-ai.github.io). Jaeyoon Jung, Jinjoo Lee, Yongjae Lee, Dongjun Lee, Woosung Joo
- **Language(s) (NLP):** Korean, English (currently, bilingual)
## Model Description
Version 0.1 uses cross-lingual training to transfer instruction-following capabilities from English to Korean.
- We trained this model on 8 H100-80G GPUs for 1 day with a cross-lingual training dataset
- We recommend using the fixed system prompt below unless you fine-tune the model
```
너는 마음에이아이의 챗봇 MAAL이다. 고객의 질문에 친절하게 답하여라.
```
## Sample inference code (GPU)
```python
import transformers
import torch
model_id = "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1"
model = transformers.AutoModelForCausalLM.from_pretrained(model_id).to("cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
streamer = transformers.TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# we recommend using the fixed prompt for the model unless you fine-tune it
prompt = "너는 마음에이아이의 챗봇 MAAL이다. 고객의 질문에 친절하게 답하여라."
instruction = "사과 한 박스에는 사과가 30개 들어있는데, 처음에는 사과 3박스가 있었고, 내가 사과 5개를 먹었어. 남은 사과는 총 몇개야?"
messages = [
{"role": "system", "content": f"{prompt}"},
{"role": "user", "content": f"{instruction}"}
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
return_tensors='pt').to("cuda")
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=1024, pad_token_id=tokenizer.eos_token_id)
```
## Evaluation Results
As the main goal of version 0.1 is to **transfer instruction-following capabilities from English to Korean** without utilizing continuous pre-training, etc., we select [**LogicKor**](https://github.com/StableFluffy/LogicKor) as our evaluation method to assess the Korean instruction skills.
We compare our model with a similar parameter model (less than 13B) that has been fine-tuned on the Korean dataset. \* denotes our self-report result.
| Model | single-turn(↑) | multi-turn(↑) | average(↑) |
| --- | --- | --- | --- |
| maum-ai/Llama-3-MAAL-8B-Instruct-v0.1* | **5.80** | 4.66 | **5.23** |
| maywell/Synatra-kiqu-10.7B | 5.71 | 4.73 | 5.22 |
| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 5.78 | 3.92 | 4.85 |
| nlpai-lab/KULLM3 | 4.61 | **4.83** | 4.72 |
| MLP-KTLim/llama3-Bllossom* | 2.11 | 1.57 | 1.84 |
## Limitations
Due to this model being trained on a small dataset, it has several limitations.
- It struggles to generate diverse Korean text
- It lacks Korean knowledge and cultural context (localization)
- It does not work with image or video inputs
## Todo
We will address these limitations one by one in future upgrades of this model, for example:
- Enhance the Korean generation through Vocabulary Expansion & Continuous pre-training. (more Korean corpus!)
- Localize with cultural adaptation method and additional Korean knowledge data. [*similar idea*](https://aclanthology.org/2023.emnlp-main.18/)
- Develop a Vision Language Model that can handle both video and image inputs. [*similar idea*](https://github.com/PKU-YuanGroup/Video-LLaVA)
<!-- original-model-card end -->
|
qwp4w3hyb/Llama-3-8B-Instruct-Coder-v2-iMat-GGUF | qwp4w3hyb | 2024-05-11T01:02:57Z | 423 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"imatrix",
"importance matrix",
"en",
"base_model:rombodawg/Llama-3-8B-Instruct-Coder-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-08T14:53:29Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- gguf
- imatrix
- importance matrix
base_model: rombodawg/Llama-3-8B-Instruct-Coder-v2
---
# Quant Infos
- Quantized with recent bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920
- quants done with an importance matrix for improved quantization loss
- 0, K & IQ quants in basically all variants from Q8 down to IQ1_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [04976db7a819fcf8bfefbfc09a3344210b79dd27](https://github.com/ggerganov/llama.cpp/commit/04976db7a819fcf8bfefbfc09a3344210b79dd27) (master from 2024-05-07)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
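As an illustration (not part of the original notes; file names follow the variables used above), the resulting imatrix file is then passed to llama.cpp's `quantize` tool when producing the quantized GGUFs:
```
./quantize --imatrix $out_path/imat-f16-gmerged.dat $model_name-f16.gguf $model_name-Q4_K_M.gguf Q4_K_M
```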
# Original Model Card
Llama-3-8B-Instruct-Coder-v2

_________________________________________________________________________
How is this model different from rombodawg/Llama-3-8B-Instruct-Coder? The first model was trained on a dataset with some major flaws that I had originally missed. In version 2, all of those flaws are fixed and the model is fully retrained, so it performs much better than the previous iteration.
_________________________________________________________________________
This model is llama-3-8b-instruct from Meta (uploaded by unsloth) trained on the full 150k Code Feedback Filtered Instruction dataset. You can find that dataset linked below. This AI model was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker walmartbag.
The Qalore method combines QLoRA training with methods from GaLore for additional reductions in VRAM, allowing llama-3-8b to be loaded in 14.5 GB of VRAM. This allowed the training to be completed on an RTX A5000 24GB in 50 hours for less than $15.
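As a rough illustration only — this is a generic QLoRA-style sketch (4-bit base model plus LoRA adapters via `bitsandbytes`/`peft`), not the actual Qalore notebook linked below, and the GaLore-specific optimizer changes are omitted; the base model id is a placeholder:
```python
# Generic QLoRA-style sketch (NOT the Qalore notebook): 4-bit base model + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "unsloth/llama-3-8b-Instruct"  # placeholder for the llama-3-8b-instruct base

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load the base weights in 4-bit to save VRAM
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```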
Dataset used for training this model:
- https://huggingface.co/datasets/Replete-AI/CodeFeedback-Filtered-Instruction-Simplified-Pairs
Qalore notebook for training:
- https://colab.research.google.com/drive/1bX4BsjLcdNJnoAf7lGXmWOgaY8yekg8p?usp=sharing
Quantizations for easier inference:
- https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-v2-GGUF
- https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-v2-exl2 |
RichardErkhov/openchat_-_openchat-3.6-8b-20240522-gguf | RichardErkhov | 2024-05-29T18:00:17Z | 423 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-29T08:54:47Z | Entry not found |
zachaman/phi3-astralsnow9-merged-Q8_0-GGUF | zachaman | 2024-06-03T06:23:20Z | 423 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:zachaman/phi3-astralsnow9-merged",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-03T04:48:25Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: zachaman/phi3-astralsnow9-merged
---
# zachaman/phi3-astralsnow9-merged-Q8_0-GGUF
This model was converted to GGUF format from [`zachaman/phi3-astralsnow9-merged`](https://huggingface.co/zachaman/phi3-astralsnow9-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zachaman/phi3-astralsnow9-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo zachaman/phi3-astralsnow9-merged-Q8_0-GGUF --hf-file phi3-astralsnow9-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zachaman/phi3-astralsnow9-merged-Q8_0-GGUF --hf-file phi3-astralsnow9-merged-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo zachaman/phi3-astralsnow9-merged-Q8_0-GGUF --hf-file phi3-astralsnow9-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo zachaman/phi3-astralsnow9-merged-Q8_0-GGUF --hf-file phi3-astralsnow9-merged-q8_0.gguf -c 2048
``` |
appvoid/test-mix-03 | appvoid | 2024-06-03T14:58:44Z | 423 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Josephgflowers/TinyLlama-Cinder-Agent-v1",
"base_model:appvoid/palmer-003",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T14:57:36Z | ---
base_model:
- Josephgflowers/TinyLlama-Cinder-Agent-v1
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Josephgflowers/TinyLlama-Cinder-Agent-v1](https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Agent-v1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/palmer-003
- model: Josephgflowers/TinyLlama-Cinder-Agent-v1
merge_method: slerp
base_model: appvoid/palmer-003
dtype: float16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: palmer-003 for input & output, TinyLlama-Cinder-Agent-v1 in the middle layers
```
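As a usage sketch (assuming the `mergekit` package is installed; the output path is a placeholder), a configuration like the one above is applied with the `mergekit-yaml` CLI:
```shell
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```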
|
mkerner1876/tunedthermo1 | mkerner1876 | 2024-06-20T02:50:56Z | 423 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-20T02:48:03Z | Entry not found |
rollerhafeezh-amikom/ner-silvanus-quantization | rollerhafeezh-amikom | 2024-06-28T11:39:05Z | 423 | 0 | transformers | [
"transformers",
"onnx",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-06-28T11:38:26Z | Entry not found |
faizalnf1800/AusterMix-V1-FP16-SD1.5 | faizalnf1800 | 2024-06-30T04:43:44Z | 423 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-30T04:41:28Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sgunderscore/hatescore-korean-hate-speech | sgunderscore | 2022-04-07T10:32:16Z | 422 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-04-07T09:54:34Z | Entry not found |
pysentimiento/roberta-targeted-sentiment-analysis | pysentimiento | 2023-06-19T17:47:49Z | 422 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-04-05T17:34:40Z | Entry not found |
timm/crossvit_15_dagger_240.in1k | timm | 2023-04-24T00:31:05Z | 422 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.14899",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-24T00:30:33Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for crossvit_15_dagger_240.in1k
A CrossViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.2
- GMACs: 6.1
- Activations (M): 20.4
- Image size: 240 x 240
- **Papers:**
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification: https://arxiv.org/abs/2103.14899
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/IBM/CrossViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('crossvit_15_dagger_240.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'crossvit_15_dagger_240.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (torch.Size([1, 401, 192]), torch.Size([1, 197, 384])) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{
chen2021crossvit,
title={{CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification}},
author={Chun-Fu (Richard) Chen and Quanfu Fan and Rameswar Panda},
booktitle={International Conference on Computer Vision (ICCV)},
year={2021}
}
```
|
syzymon/long_llama_3b_instruct | syzymon | 2023-08-05T00:03:02Z | 422 | 26 | transformers | [
"transformers",
"pytorch",
"longllama",
"text-generation",
"code",
"text-generation-inference",
"custom_code",
"dataset:Open-Orca/OpenOrca",
"dataset:zetavg/ShareGPT-Processed",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2307.03170",
"arxiv:2305.16300",
"model-index",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-08-04T17:35:59Z | ---
datasets:
- Open-Orca/OpenOrca
- zetavg/ShareGPT-Processed
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
- tiiuae/falcon-refinedweb
metrics:
- code_eval
- accuracy
pipeline_tag: text-generation
tags:
- code
- text-generation-inference
model-index:
- name: long_llama_3b_instruct
results:
- task:
name: Code Generation
type: code-generation
dataset:
name: "HumanEval"
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 0.12
verified: false
---
# LongLLaMA: Focused Transformer Training for Context Scaling
<div align="center">
<a href="https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
</div>
<div align="center">
[TLDR](#TLDR) | [Overview](#Overview) | [Usage](#Usage) | [LongLLaMA performance](#LongLLaMA-performance) | [Authors](#Authors) | [Citation](#Citation) | [License](#License) | [Acknowledgments](#Acknowledgments)
</div>
## TLDR
This repo contains [LongLLaMA-Instruct-3Bv1.1](https://huggingface.co/syzymon/long_llama_3b_instruct) that is for **research purposes only**.
LongLLaMA is built upon the foundation of [OpenLLaMA](https://github.com/openlm-research/open_llama) and fine-tuned using the [Focused Transformer (FoT)](https://arxiv.org/abs/2307.03170) method. We release a smaller 3B base variant (not instruction tuned) of the LongLLaMA model on a permissive license (Apache 2.0) and inference code supporting longer contexts on [Hugging Face](https://huggingface.co/syzymon/long_llama_3b). Our model weights can serve as the drop-in replacement of LLaMA in existing implementations (for short context up to 2048 tokens). Additionally, we provide evaluation results and comparisons against the original OpenLLaMA models. Stay tuned for further updates.
## Overview
### Base models
[Focused Transformer: Contrastive Training for Context Scaling](https://arxiv.org/abs/2307.03170) (FoT) presents a simple method for endowing language models with the ability to handle context consisting possibly of millions of tokens while training on significantly shorter input. FoT permits a subset of attention layers to access a memory cache of (key, value) pairs to extend the context length. The distinctive aspect of FoT is its training procedure, drawing from contrastive learning. Specifically, we deliberately expose the memory attention layers to both relevant and irrelevant keys (like negative samples from unrelated documents). This strategy incentivizes the model to differentiate keys connected with semantically diverse values, thereby enhancing their structure. This, in turn, makes it possible to extrapolate the effective context length much beyond what is seen in training.
**LongLLaMA** is an [OpenLLaMA](https://github.com/openlm-research/open_llama) model finetuned with the FoT method,
with three layers used for context extension. **Crucially, LongLLaMA is able to extrapolate much beyond the context length seen in training: $8k$. E.g., in the passkey retrieval task, it can handle inputs of length $256k$**.
<div align="center">
| | [LongLLaMA-3B](https://huggingface.co/syzymon/long_llama_3b_instruct) | [LongLLaMA-3Bv1.1](https://huggingface.co/syzymon/long_llama_3b_v1_1) | LongLLaMA-7B<br />*(coming soon)*| LongLLaMA-13B<br />*(coming soon)*|
|----------------|----------|----------|-----------|-----------|
| Source model | [OpenLLaMA-3B](https://huggingface.co/openlm-research/open_llama_3b_easylm) | [OpenLLaMA-3Bv2](https://huggingface.co/openlm-research/open_llama_3b_v2_easylm) | - | - |
| Source model tokens | 1T | 1T | - | - |
| Fine-tuning tokens | 10B | 5B | - | -|
| Memory layers | 6, 12, 18 | 6, 12, 18 | - | -|
</div>
### Instruction/Chat tuning
In the [fine_tuning](fine_tuning) subfolder we provide the code that was used to create [LongLLaMA-Instruct-3Bv1.1](https://huggingface.co/syzymon/long_llama_3b_instruct), an instruction-tuned version of [LongLLaMA-3Bv1.1](https://huggingface.co/syzymon/long_llama_3b_v1_1). We used [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) (instructions) and [zetavg/ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed) (chat) datasets for tuning.
## Usage
See also:
* [Colab with LongLLaMA-Instruct-3Bv1.1](https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb).
* [Colab with an example usage of base LongLLaMA](https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_colab.ipynb).
### Requirements
```
pip install --upgrade pip
pip install transformers==4.30 sentencepiece accelerate
```
### Loading model
```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_instruct")
model = AutoModelForCausalLM.from_pretrained("syzymon/long_llama_3b_instruct",
torch_dtype=torch.float32,
trust_remote_code=True)
```
### Input handling and generation
LongLLaMA uses the Hugging Face interface, the long input given to the model will be
split into context windows and loaded into the memory cache.
```python
prompt = "My name is Julien and I like to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model(input_ids=input_ids)
```
During the model call, one can provide the parameter `last_context_length` (default $1024$), which specifies the number of tokens left in the last context window. Tuning this parameter can improve generation as the first layers do not have access to memory. See details in [How LongLLaMA handles long inputs](#How-LongLLaMA-handles-long-inputs).
```python
generation_output = model.generate(
input_ids=input_ids,
max_new_tokens=256,
num_beams=1,
last_context_length=1792,
do_sample=True,
temperature=1.0,
)
print(tokenizer.decode(generation_output[0]))
```
### Additional configuration
LongLLaMA has several other parameters:
* `mem_layers` specifies layers endowed with memory (should be either an empty list or a list of all memory layers specified in the description of the checkpoint).
* `mem_dtype` allows changing the type of memory cache
* `mem_attention_grouping` can trade off speed for reduced memory usage.
When equal to `(4, 2048)`, the memory layers will process at most $4*2048$ queries at once ($4$ heads and $2048$ queries for each head).
```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_instruct")
model = AutoModelForCausalLM.from_pretrained(
"syzymon/long_llama_3b_instruct", torch_dtype=torch.float32,
mem_layers=[],
mem_dtype='bfloat16',
trust_remote_code=True,
mem_attention_grouping=(4, 2048),
)
```
### Drop-in use with LLaMA code
LongLLaMA checkpoints can also be used as a drop-in replacement for LLaMA checkpoints in [Hugging Face implementation of LLaMA](https://huggingface.co/docs/transformers/main/model_doc/llama), but in this case, they will be limited to the original context length of $2048$.
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch
tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_instruct")
model = LlamaForCausalLM.from_pretrained("syzymon/long_llama_3b_instruct", torch_dtype=torch.float32)
```
### How LongLLaMA handles long inputs
Inputs over $2048$ tokens are automatically split into windows $w_1, \ldots, w_m$. The first $m-2$ windows contain $2048$ tokens each, $w_{m-1}$ has no more than $2048$ tokens, and $w_m$ contains the number of tokens specified by `last_context_length`. The model processes the windows one by one extending the memory cache after each. If `use_cache` is `True`, the last window will not be loaded to the memory cache but to the local (generation) cache.
The memory cache stores $(key, value)$ pairs for each head of the specified memory layers `mem_layers`. In addition to this, it stores attention masks.
If `use_cache=True` (which is the case in generation), LongLLaMA will use two caches: the memory cache for the specified layers and the local (generation) cache for all layers. When the local cache exceeds $2048$ elements, its content is moved to the memory cache for the memory layers.
For simplicity, context extension is realized with a memory cache and full attention in this repo. Replacing this simple mechanism with a KNN search over an external database is possible with systems like [Faiss](https://github.com/facebookresearch/faiss). This would potentially enable further context length scaling. We leave this as future work.
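As a rough illustration of the splitting rule above (a standalone sketch, not the library's internal code):
```python
# Illustrative only: split a long input into 2048-token windows, with the final
# window holding `last_context_length` tokens.
def split_into_windows(num_tokens: int, window: int = 2048, last_context_length: int = 1024):
    last = min(last_context_length, num_tokens)
    remaining = num_tokens - last
    sizes = []
    while remaining > 0:
        sizes.append(min(window, remaining))
        remaining -= sizes[-1]
    sizes.append(last)
    return sizes

print(split_into_windows(5000))  # [2048, 1928, 1024]
```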
## LongLLaMA performance
We present some illustrative examples of LongLLaMA results. Refer to our paper [Focused Transformer: Contrastive Training for Context Scaling](https://arxiv.org/abs/2307.03170) for more details.
We manage to achieve good performance on the passkey retrieval task from [Landmark Attention: Random-Access Infinite Context Length for Transformers](https://arxiv.org/abs/2305.16300). The code for generating the prompt and running the model is located in `examples/passkey.py`.
<p align="center" width="100%">
<img src="assets/plot_passkey.png" alt="LongLLaMA" style="width: 70%; min-width: 300px; display: block; margin: auto;">
</p>
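For illustration only — the exact prompt wording lives in `examples/passkey.py`; this sketch merely shows the shape of the task (a random key hidden in filler text):
```python
# Hedged sketch of a passkey-retrieval prompt generator.
import random

def build_passkey_prompt(n_filler_lines: int = 400):
    passkey = random.randint(10000, 99999)
    filler = "The grass is green. The sky is blue. The sun is yellow. Here we go. There and back again.\n"
    lines = [filler] * n_filler_lines
    lines.insert(n_filler_lines // 2, f"The pass key is {passkey}. Remember it. {passkey} is the pass key.\n")
    prompt = (
        "There is important info hidden inside a lot of irrelevant text. Find it and memorize it.\n"
        + "".join(lines)
        + "What is the pass key? The pass key is"
    )
    return prompt, passkey
```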
Our LongLLaMA 3B model also shows improvements when using long context on two downstream tasks, TREC question classification and WebQS question answering.
<div align="center">
| Context/Dataset | TREC | WebQS |
| --- | --- | --- |
| $2K$ | 67.0 | 21.2 |
| $4K$ | 71.6 | 21.4 |
| $6K$ | 72.9 | 22.2 |
| $8K$ | **73.3** | **22.4** |
</div>
LongLLaMA retains performance on tasks that do not require long context. We provide a comparison with OpenLLaMA
on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) in the zero-shot setting.
<div align="center">
| Task/Metric | OpenLLaMA-3B | LongLLaMA-3B |
|----------------|----------|-----------|
| anli_r1/acc | 0.33 | 0.32 |
| anli_r2/acc | 0.32 | 0.33 |
| anli_r3/acc | 0.35 | 0.35 |
| arc_challenge/acc | 0.34 | 0.34 |
| arc_challenge/acc_norm | 0.37 | 0.37 |
| arc_easy/acc | 0.69 | 0.68 |
| arc_easy/acc_norm | 0.65 | 0.63 |
| boolq/acc | 0.68 | 0.68 |
| hellaswag/acc | 0.49 | 0.48 |
| hellaswag/acc_norm | 0.67 | 0.65 |
| openbookqa/acc | 0.27 | 0.28 |
| openbookqa/acc_norm | 0.40 | 0.38 |
| piqa/acc | 0.75 | 0.73 |
| piqa/acc_norm | 0.76 | 0.75 |
| record/em | 0.88 | 0.87 |
| record/f1 | 0.89 | 0.87 |
| rte/acc | 0.58 | 0.60 |
| truthfulqa_mc/mc1 | 0.22 | 0.24 |
| truthfulqa_mc/mc2 | 0.35 | 0.38 |
| wic/acc | 0.48 | 0.50 |
| winogrande/acc | 0.62 | 0.60 |
| Avg score | 0.53 | 0.53 |
</div>
Starting with the v1.1 models, we have decided to use the [EleutherAI](https://github.com/EleutherAI) implementation of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) with a slight modification that adds a `<bos>` token at the beginning of the input sequence. The results are provided in the table below.
<div align="center">
| description | LongLLaMA-3B | OpenLLaMA-3Bv2 | LongLLaMA-3Bv1.1 | LongLLaMA-Instruct-3Bv1.1 |
|:-----------------------|:--------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------|
| anli_r1/acc | 0.32 | 0.33 | 0.31 | 0.33 |
| anli_r2/acc | 0.33 | 0.35 | 0.33 | 0.35 |
| anli_r3/acc | 0.35 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.33 | 0.32 | 0.36 |
| arc_challenge/acc_norm | 0.37 | 0.36 | 0.36 | 0.37 |
| arc_easy/acc | 0.67 | 0.68 | 0.68 | 0.7 |
| arc_easy/acc_norm | 0.63 | 0.63 | 0.63 | 0.63 |
| boolq/acc | 0.68 | 0.67 | 0.66 | 0.77 |
| hellaswag/acc | 0.48 | 0.53 | 0.52 | 0.52 |
| hellaswag/acc_norm | 0.65 | 0.7 | 0.69 | 0.68 |
| openbookqa/acc | 0.28 | 0.28 | 0.28 | 0.28 |
| openbookqa/acc_norm | 0.38 | 0.39 | 0.37 | 0.41 |
| piqa/acc | 0.73 | 0.77 | 0.77 | 0.78 |
| piqa/acc_norm | 0.75 | 0.78 | 0.77 | 0.77 |
| record/em | 0.87 | 0.87 | 0.86 | 0.85 |
| record/f1 | 0.88 | 0.88 | 0.87 | 0.86 |
| rte/acc | 0.6 | 0.53 | 0.62 | 0.7 |
| truthfulqa_mc/mc1 | 0.24 | 0.22 | 0.21 | 0.25 |
| truthfulqa_mc/mc2 | 0.38 | 0.35 | 0.35 | 0.4 |
| wic/acc | 0.5 | 0.5 | 0.5 | 0.54 |
| winogrande/acc | 0.6 | 0.66 | 0.63 | 0.65 |
| Avg score | 0.53 | 0.53 | 0.53 | 0.55 |
</div>
We also provide the results on human-eval. We cut the generated text after either
* `"\ndef "`
* `"\nclass "`
* `"\nif __name__"`
<div align="center">
| | OpenLLaMA-3Bv2 | LongLLaMA-3Bv1.1 | LongLLaMA-Instruct-3Bv1.1 |
| - | - | - | - |
| pass@1| 0.09| 0.12 | 0.12 |
</div>
## Authors
- [Szymon Tworkowski](https://scholar.google.com/citations?user=1V8AeXYAAAAJ&hl=en)
- [Konrad Staniszewski](https://scholar.google.com/citations?user=CM6PCBYAAAAJ)
- [Mikołaj Pacek](https://scholar.google.com/citations?user=eh6iEbQAAAAJ&hl=en&oi=ao)
- [Henryk Michalewski](https://scholar.google.com/citations?user=YdHW1ycAAAAJ&hl=en)
- [Yuhuai Wu](https://scholar.google.com/citations?user=bOQGfFIAAAAJ&hl=en)
- [Piotr Miłoś](https://scholar.google.pl/citations?user=Se68XecAAAAJ&hl=pl&oi=ao)
## Citation
To cite this work please use
```bibtex
@misc{tworkowski2023focused,
title={Focused Transformer: Contrastive Training for Context Scaling},
author={Szymon Tworkowski and Konrad Staniszewski and Mikołaj Pacek and Yuhuai Wu and Henryk Michalewski and Piotr Miłoś},
year={2023},
eprint={2307.03170},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The code and base models checkpoints are licensed under [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
The instruction/chat tuned models are for research purposes only.
Some of the examples use external code (see headers of files for copyright notices and licenses).
## Acknowledgments
We gratefully acknowledge the TPU Research Cloud program, which was instrumental to our research by providing significant computational resources. We are also grateful to Xinyang Geng and Hao Liu for releasing [OpenLLaMA](https://github.com/openlm-research/open_llama) checkpoints and the [EasyLM](https://github.com/young-geng/EasyLM) library.
|
cardiffnlp/twitter-roberta-large-emotion-latest | cardiffnlp | 2024-03-07T15:23:56Z | 422 | 1 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:cardiffnlp/super_tweeteval",
"arxiv:2310.14757",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-09T11:29:37Z | ---
language:
- en
license: mit
datasets:
- cardiffnlp/super_tweeteval
pipeline_tag: text-classification
---
# cardiffnlp/twitter-roberta-large-emotion-latest
This is a RoBERTa-large model trained on 154M tweets until the end of December 2022 and finetuned for emotion classification (multilabel classification) on the _TweetEmotion_ dataset of [SuperTweetEval](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m).
## Labels
<code>
"id2label": {
"0": "anger",
"1": "anticipation",
"2": "disgust",
"3": "fear",
"4": "joy",
"5": "love",
"6": "optimism",
"7": "pessimism",
"8": "sadness",
"9": "surprise",
"10": "trust"
}
</code>
## Example
```python
from transformers import pipeline
text= "@user it also helps that the majority of NFL coaching is inept. Some of Bill O'Brien's play calling was wow, ! #GOPATS"
pipe = pipeline('text-classification', model="cardiffnlp/twitter-roberta-large-emotion-latest", return_all_scores=True)
predictions = pipe(text)[0]
predictions = [x for x in predictions if x['score'] > 0.5]
predictions
>> [{'label': 'anger', 'score': 0.927680253982544},
{'label': 'disgust', 'score': 0.895420491695404},
{'label': 'joy', 'score': 0.9239692687988281},
{'label': 'optimism', 'score': 0.6795405745506287}]
```
## Citation Information
Please cite the [reference paper](https://arxiv.org/abs/2310.14757) if you use this model.
```bibtex
@inproceedings{antypas2023supertweeteval,
title={SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for Social Media NLP Research},
author={Dimosthenis Antypas and Asahi Ushio and Francesco Barbieri and Leonardo Neves and Kiamehr Rezaee and Luis Espinosa-Anke and Jiaxin Pei and Jose Camacho-Collados},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
year={2023}
}
``` |
Yukang/LongAlpaca-7B-16k | Yukang | 2023-11-18T17:25:16Z | 422 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-15T01:28:13Z | Entry not found |
unum-cloud/uform-gen-chat | unum-cloud | 2023-12-31T20:26:31Z | 422 | 20 | transformers | [
"transformers",
"safetensors",
"vlm",
"text-generation",
"image-captioning",
"visual-question-answering",
"en",
"dataset:sbu_captions",
"dataset:visual_genome",
"dataset:HuggingFaceM4/VQAv2",
"dataset:ChristophSchuhmann/MS_COCO_2017_URL_TEXT",
"base_model:unum-cloud/uform-vl-english",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| visual-question-answering | 2023-12-27T11:01:13Z | ---
pipeline_tag: visual-question-answering
tags:
- image-captioning
- visual-question-answering
datasets:
- sbu_captions
- visual_genome
- HuggingFaceM4/VQAv2
- ChristophSchuhmann/MS_COCO_2017_URL_TEXT
language:
- en
license: apache-2.0
base_model: unum-cloud/uform-vl-english
---
<h1 align="center">UForm</h1>
<h3 align="center">
Pocket-Sized Multimodal AI<br/>
For Content Understanding and Generation<br/>
</h3>
## Description
UForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model consists of two parts:
1. [UForm Vision Encoder](https://huggingface.co/unum-cloud/uform-vl-english)
2. [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) manually tuned on the instructions dataset
The model was pre-trained on MSCOCO, SBU Captions, Visual Genome, VQAv2, GQA, and a few internal datasets. UForm-Gen-Chat is an SFT version of [`UForm-Gen`](https://huggingface.co/unum-cloud/uform-gen) for multimodal chat.
### Usage
```bash
pip install uform
```
For the CLI demo run the following:
```bash
uform-chat --model unum-cloud/uform-gen-chat --image_path=zebra.jpg
uform-chat --model unum-cloud/uform-gen-chat --image_path=zebra.jpg --device="cuda:0" --fp16
```
Or if you want to use the model in your code:
```python
import torch
from PIL import Image
from uform.gen_model import VLMForCausalLM, VLMProcessor
model = VLMForCausalLM.from_pretrained("unum-cloud/uform-gen-chat")
processor = VLMProcessor.from_pretrained("unum-cloud/uform-gen-chat")
prompt = "What do you see?"
image = Image.open("zebra.jpg")
inputs = processor(texts=[prompt], images=[image], return_tensors="pt")
with torch.inference_mode():
output = model.generate(
**inputs,
do_sample=False,
use_cache=True,
max_new_tokens=128,
eos_token_id=32001,
pad_token_id=processor.tokenizer.pad_token_id
)
prompt_len = inputs["input_ids"].shape[1]
decoded_text = processor.batch_decode(output[:, prompt_len:])[0]
```
## Evaluation
For captioning evaluation we measure CLIPScore and RefCLIPScore¹.
| Model | Size | Caption Length | CLIPScore | RefCLIPScore |
| :---------------------------------- | ---: | -------------: | --------: | -----------: |
| `llava-hf/llava-1.5-7b-hf` | 7B | Long | 0.878 | 0.529 |
| `llava-hf/llava-1.5-7b-hf` | 7B | Short | 0.886 | 0.531 |
| | | | | |
| `Salesforce/instructblip-vicuna-7b` | 7B | Long | 0.902 | 0.534 |
| `Salesforce/instructblip-vicuna-7b` | 7B | Short | 0.848 | 0.523 |
| | | | | |
| `unum-cloud/uform-gen-chat` | 1.5B | Long | 0.860 | 0.525 |
| `unum-cloud/uform-gen-chat` | 1.5B | Short | 0.858 | 0.525 |
¹ We used `apple/DFN5B-CLIP-ViT-H-14-378` CLIP model. |
ChrisWilson011016/5GKsrEJYWmBC71epnJc5EHrtWGoNAovUWT5DVeyjU6NoBQpN_vgg | ChrisWilson011016 | 2024-03-04T18:51:06Z | 422 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-24T15:10:52Z | Entry not found |
RichardErkhov/GroNLP_-_gpt2-small-italian-gguf | RichardErkhov | 2024-04-17T10:23:59Z | 422 | 0 | null | [
"gguf",
"arxiv:2012.05628",
"region:us"
]
| null | 2024-04-17T10:21:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-small-italian - GGUF
- Model creator: https://huggingface.co/GroNLP/
- Original model: https://huggingface.co/GroNLP/gpt2-small-italian/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-small-italian.Q2_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q2_K.gguf) | Q2_K | 0.06GB |
| [gpt2-small-italian.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.IQ3_XS.gguf) | IQ3_XS | 0.06GB |
| [gpt2-small-italian.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.IQ3_S.gguf) | IQ3_S | 0.06GB |
| [gpt2-small-italian.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q3_K_S.gguf) | Q3_K_S | 0.06GB |
| [gpt2-small-italian.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [gpt2-small-italian.Q3_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q3_K.gguf) | Q3_K | 0.07GB |
| [gpt2-small-italian.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [gpt2-small-italian.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [gpt2-small-italian.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [gpt2-small-italian.Q4_0.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q4_0.gguf) | Q4_0 | 0.08GB |
| [gpt2-small-italian.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [gpt2-small-italian.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [gpt2-small-italian.Q4_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q4_K.gguf) | Q4_K | 0.08GB |
| [gpt2-small-italian.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [gpt2-small-italian.Q4_1.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q4_1.gguf) | Q4_1 | 0.08GB |
| [gpt2-small-italian.Q5_0.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q5_0.gguf) | Q5_0 | 0.09GB |
| [gpt2-small-italian.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [gpt2-small-italian.Q5_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q5_K.gguf) | Q5_K | 0.09GB |
| [gpt2-small-italian.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [gpt2-small-italian.Q5_1.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q5_1.gguf) | Q5_1 | 0.1GB |
| [gpt2-small-italian.Q6_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-italian-gguf/blob/main/gpt2-small-italian.Q6_K.gguf) | Q6_K | 0.1GB |
Original model description:
---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mmnga/alfredplpl-Llama-3-8B-Instruct-Ja-gguf | mmnga | 2024-04-23T15:24:47Z | 422 | 0 | null | [
"gguf",
"llama3",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:llama3",
"region:us"
]
| null | 2024-04-23T14:18:57Z | ---
license: llama3
language:
- en
- ja
tags:
- llama3
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# alfredplpl-Llama-3-8B-Instruct-Ja-gguf
This is a GGUF-format conversion of [Llama-3-8B-Instruct-Ja, published by alfredplpl](https://huggingface.co/alfredplpl/Llama-3-8B-Instruct-Ja).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Other models
[mmnga/alfredplpl-Llama-3-8B-Instruct-Ja-gguf](https://huggingface.co/mmnga/alfredplpl-Llama-3-8B-Instruct-Ja-gguf)
[mmnga/alfredplpl-suzume-poc-gguf](https://huggingface.co/mmnga/alfredplpl-suzume-poc-gguf)
[mmnga/alfredplpl-gemma-2b-it-ja-poc-gguf](https://huggingface.co/mmnga/alfredplpl-gemma-2b-it-ja-poc-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'alfredplpl-Llama-3-8B-Instruct-Ja-Q4_0.gguf' -p "<|begin_of_text|><|start_header_id|>user <|end_header_id|>\n\nこんにちわ<|eot_id|><|start_header_id|>assistant <|end_header_id|>\n\n" -n 128
``` |
sail/Sailor-14B-Chat-gguf | sail | 2024-05-29T09:55:14Z | 422 | 4 | null | [
"gguf",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:openbmb/UltraFeedback",
"arxiv:2404.03608",
"base_model:sail/Sailor-14B-Chat",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-16T07:34:02Z | ---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
- HuggingFaceH4/ultrachat_200k
- openbmb/UltraFeedback
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
widget:
- text: "如何制作烤鱼?"
example_title: "Chinese"
- text: "How to bake fish?"
example_title: "English"
- text: "Bagaimana cara memanggang ikan?"
example_title: "Malay"
- text: "วิธีย่างปลา?"
example_title: "Thai"
- text: "Bagaimana membuat bakaran ikan?"
example_title: "Indonesian"
- text: "Làm thế nào để nướng cá?"
example_title: "Vietnamese"
license: apache-2.0
base_model: sail/Sailor-14B-Chat
---
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across diverse linguistic landscapes of SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B versions for different requirements.
We further fine-tune the base model with open-source datasets to get instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already have great performance on SEA languages.
The pre-training corpus heavily leverages the publicly available corpus, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction tuning corpus are all publicly available including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
### GGUF model list
| Name | Quant method | Bits | Size | Use case |
| ------------------------------------------------------------ | ------------ | ---- | -------- | -------------------------------------- |
| [ggml-model-Q2_K.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q2_K.gguf) | Q2_K | 2 | 5.91 GB | medium, significant quality loss |
| [ggml-model-Q3_K_M.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q3_K_M.gguf) | Q3_K_M | 3 | 7.42 GB | medium, balanced quality |
| [ggml-model-Q3_K_S.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q3_K_S.gguf) | Q3_K_S | 3 | 6.77 GB | medium, high quality loss |
| [ggml-model-Q4_K_M.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q4_K_M.gguf) | Q4_K_M | 4 | 9.19 GB | large, balanced quality |
| [ggml-model-Q4_K_S.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q4_K_S.gguf) | Q4_K_S | 4 | 8.56 GB | large, greater quality loss |
| [ggml-model-Q5_K_M.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q5_K_M.gguf) | Q5_K_M | 5 | 10.5 GB | large, balanced quality |
| [ggml-model-Q5_K_S.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q5_K_S.gguf) | Q5_K_S | 5 | 10.0 GB | large, very low quality loss |
| [ggml-model-Q6_K.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q6_K.gguf) | Q6_K | 6 | 12.3 GB | large, extremely low quality loss |
| [ggml-model-Q8_0.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-Q8_0.gguf) | Q8_0 | 8 | 15.1 GB | very large, extremely low quality loss |
| [ggml-model-f16.gguf](https://huggingface.co/sail/Sailor-14B-Chat-gguf/blob/main/ggml-model-f16.gguf) | f16 | 16 | 28.3 GB | very large, no quality loss |
### How to run with `llama.cpp`
```shell
# install llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
pip install -r requirements.txt
# generate with llama.cpp
./main -ngl 40 -m ggml-model-Q4_K_M.gguf -p "<|im_start|>assistant\nCara memanggang ikan?\n<|im_start|>user\n" --temp 0.7 --repeat_penalty 1.1 -n 400 -e
```
> Change `-ngl 40` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
### How to run with `llama-cpp-python`
```shell
pip install llama-cpp-python
```
```python
import llama_cpp
import llama_cpp.llama_tokenizer
# load model
llama = llama_cpp.Llama.from_pretrained(
repo_id="sail/Sailor-14B-Chat-gguf",
filename="ggml-model-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained("sail/Sailor-4B-Chat"),
n_gpu_layers=40,
n_threads=8,
verbose=False,
)
system_role= 'system'
user_role = 'assistant'
assistant_role = "user"
system_prompt= \
'You are an AI assistant named Sailor created by Sea AI Lab. \
As an AI assistant, you need to answer a series of questions next, which may include languages such as English, Chinese, Thai, Vietnamese, Indonesian, Malay, and so on. \
Your answer should be friendly, unbiased, faithful, informative and detailed.'
system_prompt = f"<|im_start|>{system_role}\n{system_prompt}<|im_end|>"
# inference example
output = llama(
system_prompt + '\n' + f"<|im_start|>{user_role}\nCara memanggang ikan?\n<|im_start|>{assistant_role}\n",
max_tokens=256,
temperature=0.7,
top_p=0.75,
top_k=60,
stop=["<|im_end|>", "<|endoftext|>"]
)
print(output['choices'][0]['text'])
```
### How to build demo
Install `llama-cpp-python` and `gradio`, then run [script](https://github.com/sail-sg/sailor-llm/blob/main/demo/llamacpp_demo.py).
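As a rough idea of what that script does, here is a minimal sketch of such a demo with `llama-cpp-python` and `gradio`. The model, file, and tokenizer repo ids follow the inference example above, the prompt construction mirrors the (swapped) role template shown there, and the generation settings are illustrative assumptions.
```python
import gradio as gr
import llama_cpp
import llama_cpp.llama_tokenizer

# Load the quantized chat model, as in the inference example above.
llama = llama_cpp.Llama.from_pretrained(
    repo_id="sail/Sailor-14B-Chat-gguf",
    filename="ggml-model-Q4_K_M.gguf",
    tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained("sail/Sailor-4B-Chat"),
    n_gpu_layers=40,
    verbose=False,
)

def chat(question: str) -> str:
    # Same role template as in the example above.
    prompt = (
        "<|im_start|>system\nYou are an AI assistant named Sailor created by Sea AI Lab.<|im_end|>\n"
        f"<|im_start|>assistant\n{question}\n<|im_start|>user\n"
    )
    output = llama(
        prompt,
        max_tokens=256,
        temperature=0.7,
        stop=["<|im_end|>", "<|endoftext|>"],
    )
    return output["choices"][0]["text"]

gr.Interface(fn=chat, inputs="text", outputs="text", title="Sailor-14B-Chat").launch()
```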
# License
Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage should comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@misc{dou2024sailor,
title={Sailor: Open Language Models for South-East Asia},
author={Longxu Dou and Qian Liu and Guangtao Zeng and Jia Guo and Jiahui Zhou and Wei Lu and Min Lin},
year={2024},
eprint={2404.03608},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]). |
AlekseiPravdin/Seamaiiza-7B-v2-128k-gguf | AlekseiPravdin | 2024-05-17T11:55:04Z | 422 | 1 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"AlekseiPravdin/KSI-RP-NSK-128k-7B",
"AlekseiPravdin/Seamaiiza-7B-v1",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-16T11:10:01Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- AlekseiPravdin/KSI-RP-NSK-128k-7B
- AlekseiPravdin/Seamaiiza-7B-v1
---
# Seamaiiza-7B-v2-128k ⭐️⭐️⭐️❤️🔥(Story Streaming)
Seamaiiza-7B-v2-128k is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [AlekseiPravdin/Seamaiiza-7B-v1](https://huggingface.co/AlekseiPravdin/Seamaiiza-7B-v1)
* [AlekseiPravdin/KSI-RP-NSK-128k-7B](https://huggingface.co/AlekseiPravdin/KSI-RP-NSK-128k-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AlekseiPravdin/Seamaiiza-7B-v1
layer_range: [0, 32]
- model: AlekseiPravdin/KSI-RP-NSK-128k-7B
layer_range: [0, 32]
merge_method: slerp
base_model: AlekseiPravdin/KSI-RP-NSK-128k-7B
parameters:
t:
- filter: self_attn
value: [0, 0.57, 0.33, 0.73, 1]
- filter: mlp
value: [1, 0.53, 0.77, 0.37, 0]
- value: 0.53
dtype: bfloat16
``` |
HaitameLaf/Phi3_StoryGenerator | HaitameLaf | 2024-05-28T00:11:42Z | 422 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"NLP",
"phi3",
"sft",
"conversational",
"en",
"fr",
"dataset:HaitameLaf/VideoGameFR",
"dataset:HaitameLaf/Armes",
"dataset:HaitameLaf/challenges_FR",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-24T19:51:03Z | ---
language:
- en
- fr
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- NLP
- phi3
- sft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
datasets:
- HaitameLaf/VideoGameFR
- HaitameLaf/Armes
- HaitameLaf/challenges_FR
---
# Uploaded model
- **Developed by:** HaitameLaf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
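Since the card does not include a usage snippet, here is a minimal, hedged sketch of loading the model with the standard `transformers` text-generation pipeline. It assumes the repository ships full PyTorch weights (as the `pytorch` tag suggests); the prompt is only an illustration.
```python
from transformers import pipeline

# Assumption: the repo contains standard transformers weights for text generation.
generator = pipeline("text-generation", model="HaitameLaf/Phi3_StoryGenerator")

# The model card lists English and French training data.
result = generator(
    "Write a short story about a knight exploring a ruined castle.",
    max_new_tokens=128,
    do_sample=True,
)
print(result[0]["generated_text"])
```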
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
rhysjones/Phi-3-mini-mango-1-GGUF | rhysjones | 2024-05-26T18:11:38Z | 422 | 1 | null | [
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"phi",
"phi-3",
"text-generation",
"base_model:rhysjones/Phi-3-mini-mango-1",
"license:mit",
"region:us"
]
| text-generation | 2024-05-26T15:14:40Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- phi
- phi-3
- text-generation
model_name: Phi-3-mini-mango-1-GGUF
base_model: rhysjones/Phi-3-mini-mango-1
inference: false
model_creator: rhysjones
pipeline_tag: text-generation
quantized_by: rhysjones
license: mit
---
## Description
These are GGUF model format files for the [rhysjones/Phi-3-mini-mango-1](https://huggingface.co/rhysjones/Phi-3-mini-mango-1) Phi-3 4k model.
## Conversion process
The useful conversion script [GGUF-n-Go](https://github.com/thesven/GGUF-n-Go) by [thesven](https://github.com/thesven) was used along with
[llama.cpp](https://github.com/ggerganov/llama.cpp) to generate the different quantized sizes for the model. |
mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF | mradermacher | 2024-05-30T04:41:11Z | 422 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:SenseLLM/ReflectionSeq-GPT",
"dataset:SenseLLM/ReflectionSeq-DS",
"base_model:SenseLLM/ReflectionCoder-DS-6.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-29T12:54:09Z | ---
base_model: SenseLLM/ReflectionCoder-DS-6.7B
datasets:
- SenseLLM/ReflectionSeq-GPT
- SenseLLM/ReflectionSeq-DS
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SenseLLM/ReflectionCoder-DS-6.7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
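As a minimal, hedged sketch (assuming `llama-cpp-python` with Hugging Face Hub support is installed), one of the quants from the table below can be loaded directly from this repository:
```python
from llama_cpp import Llama

# Download and load the Q4_K_M quant listed in the table below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF",
    filename="ReflectionCoder-DS-6.7B.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```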
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-DS-6.7B-i1-GGUF/resolve/main/ReflectionCoder-DS-6.7B.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
gate369/Phi-3-mini-128k-instruct-IQ4_XS-GGUF | gate369 | 2024-06-12T04:10:41Z | 422 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
]
| text-generation | 2024-06-12T03:17:53Z | ---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
base_model: microsoft/Phi-3-mini-128k-instruct
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# gate369/Phi-3-mini-128k-instruct-IQ4_XS-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo gate369/Phi-3-mini-128k-instruct-IQ4_XS-GGUF --hf-file phi-3-mini-128k-instruct-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo gate369/Phi-3-mini-128k-instruct-IQ4_XS-GGUF --hf-file phi-3-mini-128k-instruct-iq4_xs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo gate369/Phi-3-mini-128k-instruct-IQ4_XS-GGUF --hf-file phi-3-mini-128k-instruct-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo gate369/Phi-3-mini-128k-instruct-IQ4_XS-GGUF --hf-file phi-3-mini-128k-instruct-iq4_xs-imat.gguf -c 2048
```
|
mihirkothari0702/mahishmati | mihirkothari0702 | 2024-06-17T12:08:07Z | 422 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-17T12:03:57Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Mahishmati Dreambooth model trained by mihirkothari0702 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
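For diffusers users, a minimal, hedged loading sketch is shown below; the concept token in the prompt is a guess based on the model name (the card does not state the exact trigger word), so treat it as hypothetical:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mihirkothari0702/mahishmati", torch_dtype=torch.float16
).to("cuda")

# "mahishmati" is an assumed trigger token; adjust to whatever was used during training.
image = pipe("a portrait of mahishmati, highly detailed").images[0]
image.save("mahishmati_sample.png")
```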
Sample pictures of this concept:
|
KingNish/Better-SDXL-Lora | KingNish | 2024-06-20T12:23:21Z | 422 | 10 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:fluently/Fluently-XL-Final",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2024-06-19T13:56:08Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: Photo of a young woman with long, wavy brown hair tied in a bun and glasses. She has a fair complexion and is wearing subtle makeup, emphasizing her eyes and lips. She is dressed in a black top. The background appears to be an urban setting with a building facade, and the sunlight casts a warm glow on her face.
output:
url: images/image.png
- text: A luxurious supercar with a unique design. The car should have a pearl white finish, and gold accents. 4k, realistic.
output:
url: images/whit_SuperCar.jfif
- text: IronMan walking on Street while holding board says "I am IronMan"
output:
url: images/ironman_holding_signboard.jpeg
- text: Chibi Deadpool, modern Disney style
output:
url: images/chubby deadpool.jfif
- text: Steve Jobs Star Wars movie character, highly detailed, digital fantasy character
output:
url: images/steve.jfif
- text: Digital art, portrait of an anthropomorphic roaring Tiger warrior with full armor, close up in the middle of a battle, behind him there is a banner with the text "Open Source".
output:
url: images/2nd.jfif
- text: photo of 8k ultra realistic harbour, nreal engine 5, port, boats, sunset, beautiful light, full of colour, cinematic lighting, battered, trending on artstation, 4k, hyperrealistic, focused, extreme details
output:
url: images/harbour.jfif
- text: 3D render of a green dinosaur named Yoshi from the Mario series. Yoshi is standing on a brick street in a town and is holding a sign that says "I AM AMAZING", looking at the camera.
output:
url: images/mario_dino.jfif
- text: metal raven), all metal, glowing magical eye, intricate details, perched on workshop bench, cinematic lighting, in the style of midjourney
output:
url: images/crow.jfif
- text: word "SDXL" made up of Fire
output:
url: images/sdxl.jfif
base_model: fluently/Fluently-XL-Final
instance_prompt: null
license: apache-2.0
---
# Better SDXL
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/KingNish/Better-Image-XL-Lora/tree/main) them in the Files & versions tab.
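A minimal, hedged diffusers sketch for using the LoRA with its base model. The repo id follows this card; the exact `.safetensors` file name is not stated here, so `load_lora_weights` may need an explicit `weight_name` taken from the Files & versions tab.
```python
import torch
from diffusers import DiffusionPipeline

# Load the base model the LoRA was built on, then attach the LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "fluently/Fluently-XL-Final", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KingNish/Better-SDXL-Lora")  # pass weight_name=... if needed

image = pipe(
    'word "SDXL" made up of Fire',
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("better_sdxl_sample.png")
```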
|
netcat420/MFANNv0.13.10 | netcat420 | 2024-06-20T00:55:32Z | 422 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.4",
"base_model:netcat420/MFANNv0.13",
"base_model:netcat420/MFANNv0.12",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-19T23:19:44Z | ---
base_model:
- MaziyarPanahi/Llama-3-8B-Instruct-v0.4
- netcat420/MFANNv0.13
- netcat420/MFANNv0.12
library_name: transformers
tags:
- mergekit
- merge
---
# MFANNv0.13.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MaziyarPanahi/Llama-3-8B-Instruct-v0.4](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.4) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANNv0.13](https://huggingface.co/netcat420/MFANNv0.13)
* [netcat420/MFANNv0.12](https://huggingface.co/netcat420/MFANNv0.12)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANNv0.13
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANNv0.12
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.4
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
CHE-72/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF | CHE-72 | 2024-06-22T18:05:36Z | 422 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-22T18:05:17Z | ---
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
language:
- zh
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF
This model was converted to GGUF format from [`MediaTek-Research/Breeze-7B-Instruct-v1_0`](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -c 2048
```
|
KoboldAI/fairseq-dense-2.7B-Nerys | KoboldAI | 2022-06-25T11:23:23Z | 421 | 10 | transformers | [
"transformers",
"pytorch",
"xglm",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-13T13:40:07Z | ---
language: en
license: mit
---
# Fairseq-dense 2.7B - Nerys
## Model Description
Fairseq-dense 2.7B-Nerys is a finetune created using Fairseq's MoE dense model.
## Training data
The training data contains around 2500 ebooks in various genres (the "Pike" dataset), a CYOA dataset called "CYS" and 50 Asian "Light Novels" (the "Manga-v1" dataset).
Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-2.7B-Nerys')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
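Because most of the training data is prepended with the genre tag described above, the same tag can be used to steer generations. A small illustrative example (the genres and prompt text are arbitrary):
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-2.7B-Nerys')
>>> generator("[Genre: science fiction, adventure] The dropship shuddered as it entered the atmosphere", do_sample=True, min_length=50)
```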
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
### BibTeX entry and citation info
```
Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
``` |
dwancin/memoji | dwancin | 2024-02-11T12:22:56Z | 421 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"base_model:runwayml/stable-diffusion-v1-5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-06-29T14:12:41Z | ---
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: aimoji
tags:
- stable-diffusion
- text-to-image
- diffusers
license: mit
library_name: diffusers
pipeline_tag: text-to-image
widget:
- text: 'an aimoji portrait of a boy'
output:
url: Memoji - IMG_00001.png
- text: 'an aimoji dog'
output:
url: Memoji - IMG_00002.png
---
# Memoji
Trained on images of [Apple's **Memoji**](https://support.apple.com/HT208986) avatar figures.
## Examples
<Gallery />
## Details
- **Type:** Checkpoint
- **Base Model:** [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
- **Trigger Word:** `aimoji`
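A minimal usage sketch with diffusers, assuming the standard `StableDiffusionPipeline` loading path; the prompt simply includes the `aimoji` trigger word from the widget examples:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dwancin/memoji", torch_dtype=torch.float16
).to("cuda")

# Include the trigger word "aimoji" in the prompt.
image = pipe("an aimoji portrait of a boy").images[0]
image.save("aimoji_boy.png")
```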
## Sources
This model is available on:
- [Hugging Face](https://huggingface.co/dwancin/memoji)
- [Civitai](https://civitai.com/models/23111) |
vaishali/multitabqa-base-atis | vaishali | 2024-02-20T20:39:09Z | 421 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"multitabqa",
"multi-table-question-answering",
"table-question-answering",
"en",
"dataset:vaishali/atis-tableQA",
"arxiv:2305.12820",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| table-question-answering | 2023-07-18T15:43:32Z | ---
language: en
tags:
- multitabqa
- multi-table-question-answering
license: mit
pipeline_tag: table-question-answering
datasets:
- vaishali/atis-tableQA
---
# MultiTabQA (base-sized model)
MultiTabQA was proposed in [MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering](https://arxiv.org/abs/2305.12820) by Vaishali Pal, Andrew Yates, Evangelos Kanoulas, Maarten de Rijke. The original repo can be found [here](https://github.com/kolk/MultiTabQA).
## Model description
MultiTabQA is a tableQA model which generates the answer table from multiple input tables. It can handle multi-table operators such as UNION, INTERSECT, EXCEPT, JOINs, etc.
MultiTabQA is based on the TAPEX(BART) architecture, which is a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
## Intended Uses
You can use the raw model for SQL execution over multiple input tables. The model has been finetuned on the Atis dataset, where it answers natural language questions over multiple input tables.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("vaishali/multitabqa-base-atis")
model = AutoModelForSeq2SeqLM.from_pretrained("vaishali/multitabqa-base-atis")
question = "How many departments are led by heads who are not mentioned?"
table_names = ['department', 'management']
tables=[{"columns":["Department_ID","Name","Creation","Ranking","Budget_in_Billions","Num_Employees"],
"index":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],
"data":[
[1,"State","1789",1,9.96,30266.0],
[2,"Treasury","1789",2,11.1,115897.0],
[3,"Defense","1947",3,439.3,3000000.0],
[4,"Justice","1870",4,23.4,112557.0],
[5,"Interior","1849",5,10.7,71436.0],
[6,"Agriculture","1889",6,77.6,109832.0],
[7,"Commerce","1903",7,6.2,36000.0],
[8,"Labor","1913",8,59.7,17347.0],
[9,"Health and Human Services","1953",9,543.2,67000.0],
[10,"Housing and Urban Development","1965",10,46.2,10600.0],
[11,"Transportation","1966",11,58.0,58622.0],
[12,"Energy","1977",12,21.5,116100.0],
[13,"Education","1979",13,62.8,4487.0],
[14,"Veterans Affairs","1989",14,73.2,235000.0],
[15,"Homeland Security","2002",15,44.6,208000.0]
]
},
{"columns":["department_ID","head_ID","temporary_acting"],
"index":[0,1,2,3,4],
"data":[
[2,5,"Yes"],
[15,4,"Yes"],
[2,6,"Yes"],
[7,3,"No"],
[11,10,"No"]
]
}]
input_tables = [pd.read_json(table, orient="split") for table in tables]
# flatten the model inputs in the format: query + " " + <table_name> : table_name1 + flattened_table1 + <table_name> : table_name2 + flattened_table2 + ...
# flattened_input = question + " " + " ".join(f"<table_name> : {table_name} " + linearize_table(table) for table_name, table in zip(table_names, tables))
model_input_string = """How many departments are led by heads who are not mentioned? <table_name> : department col : Department_ID | Name | Creation | Ranking | Budget_in_Billions | Num_Employees row 1 : 1 | State | 1789 | 1 | 9.96 | 30266 row 2 : 2 | Treasury | 1789 | 2 | 11.1 | 115897 row 3 : 3 | Defense | 1947 | 3 | 439.3 | 3000000 row 4 : 4 | Justice | 1870 | 4 | 23.4 | 112557 row 5 : 5 | Interior | 1849 | 5 | 10.7 | 71436 row 6 : 6 | Agriculture | 1889 | 6 | 77.6 | 109832 row 7 : 7 | Commerce | 1903 | 7 | 6.2 | 36000 row 8 : 8 | Labor | 1913 | 8 | 59.7 | 17347 row 9 : 9 | Health and Human Services | 1953 | 9 | 543.2 | 67000 row 10 : 10 | Housing and Urban Development | 1965 | 10 | 46.2 | 10600 row 11 : 11 | Transportation | 1966 | 11 | 58.0 | 58622 row 12 : 12 | Energy | 1977 | 12 | 21.5 | 116100 row 13 : 13 | Education | 1979 | 13 | 62.8 | 4487 row 14 : 14 | Veterans Affairs | 1989 | 14 | 73.2 | 235000 row 15 : 15 | Homeland Security | 2002 | 15 | 44.6 | 208000 <table_name> : management col : department_ID | head_ID | temporary_acting row 1 : 2 | 5 | Yes row 2 : 15 | 4 | Yes row 3 : 2 | 6 | Yes row 4 : 7 | 3 | No row 5 : 11 | 10 | No"""
inputs = tokenizer(model_input_string, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# 'col : count(*) row 1 : 11'
```
### How to Fine-tune
Please find the fine-tuning script [here](https://github.com/kolk/MultiTabQA).
### BibTeX entry and citation info
```bibtex
@inproceedings{pal-etal-2023-multitabqa,
title = "{M}ulti{T}ab{QA}: Generating Tabular Answers for Multi-Table Question Answering",
author = "Pal, Vaishali and
Yates, Andrew and
Kanoulas, Evangelos and
de Rijke, Maarten",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.348",
doi = "10.18653/v1/2023.acl-long.348",
pages = "6322--6334",
abstract = "Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising of 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis and GeoQuery.",
}
``` |
lorahub/flan_t5_large-glue_wnli | lorahub | 2023-07-24T09:44:51Z | 421 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T09:44:41Z | ---
library_name: peft
---
|
maddes8cht/mosaicml-mpt-7b-8k-gguf | maddes8cht | 2023-11-01T15:36:50Z | 421 | 0 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-30T11:24:19Z | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# mpt-7b-8k - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b-8k](https://huggingface.co/mosaicml/mpt-7b-8k)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model; ask your model the same question twice, and the run-to-run variation may well be bigger than the quality difference introduced by quantization.
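As a hedged sketch of how one of the GGUF files in this repository could be loaded with `llama-cpp-python` (the file name below is hypothetical; check the repository's file list for the actual quant names):
```python
from llama_cpp import Llama

# NOTE: the filename is an assumption for illustration only -- replace it with a real file from this repo.
llm = Llama.from_pretrained(
    repo_id="maddes8cht/mosaicml-mpt-7b-8k-gguf",
    filename="mosaicml-mpt-7b-8k-Q4_K_M.gguf",
    n_ctx=8192,  # the model supports 8k context via ALiBi
)

out = llm("Here is a recipe for vegan banana bread:\n", max_tokens=128)
print(out["choices"][0]["text"])
```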
---
# Original Model Card:
# MPT-7B-8k
MPT-7B-8k is a decoder-style transformer pretrained starting from MPT-7B, but updating the sequence length to 8k and training for an additional 500B tokens, resulting in a total of 1.5T tokens of text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B-8k is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B-8k is
* **Licensed for the possibility of commercial use.**
* **Trained on a large amount of data** (1.5T tokens like [XGen](https://huggingface.co/Salesforce/xgen-7b-8k-base) vs. 1T for [LLaMA](https://arxiv.org/abs/2302.13971), 1T for [MPT-7B](https://www.mosaicml.com/blog/mpt-7b), 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409). With ALiBi, the model can extrapolate beyond the 8k training sequence length to up to 10k, and with a few million tokens it can be finetuned to extrapolate much further.
* **Capable of fast training and inference** via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B-8k:
The following models are finetuned on MPT-7B-8k:
* [MPT-7B-8k-Instruct](https://huggingface.co/mosaicml/mpt-7b-8k-instruct): a model for long-form instruction following (especially summarization and question-answering).
Built by finetuning MPT-7B-8k on several carefully curated datasets.
* License: _CC-BY-SA-3.0_
* [MPT-7B-8k-Chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B-8k on approximately 1.5B tokens of chat data.
* License: _CC-By-NC-SA-4.0_
## Model Date
July 18, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-8k',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 10000 # (input + output) tokens can now be up to 10000
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-7B-8k tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1.5T tokens in total. First it was trained for 1T tokens (with batch size 1760 and sequence length 2048) on the following data mix:
#### Data Mix for Original 1T Tokens Used to Train MPT-7B
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
#### Data Mix for Additional 500B Tokens Used to Further Train MPT-7B-8k
We took 80B tokens from document samples that were longer than 4096 tokens, and 120B tokens with varying document sample lengths that matched the "baseline" length distribution for a total of 200B tokens in a single dataset.
We then trained MPT-7B for 500B tokens with a maximum sequence length of 8192, resulting in MPT-7B-8k. Since we trained for 500B tokens using 200B tokens, nearly every subset was trained on for exactly 2.5 epochs.
| Sequence Length Distribution | Number of Tokens in Source (Billion) | Proportion | Effective Number of Tokens (Billion) | Epochs |
|---|---|---|---|---|
| mC4 3.1.0 - English (200+ words) - Baseline | 33.60 | 16.80% | 84.00 | 2.50 |
| mC4 3.1.0 - English (200+ words) - ≥4096 tokens | 23.04 | 11.52% | 57.60 | 2.50 |
| c4 - English - SemDedup 80% - Baseline | 30.12 | 15.06% | 75.30 | 2.50 |
| c4 - English - SemDedup 80% - ≥4096 tokens | 0.92 | 0.46% | 2.30 | 2.50 |
| RedPajama - CommonCrawl - Baseline | 8.52 | 4.26% | 21.30 | 2.50 |
| RedPajama - CommonCrawl - ≥4096 tokens | 12.80 | 6.40% | 32.00 | 2.50 |
| The Stack - Selected Languages - Baseline | 30.00 | 15.00% | 75.00 | 2.50 |
| The Stack - Selected Languages - ≥4096 tokens | 10.00 | 5.00% | 25.00 | 2.50 |
| RedPajama - Wikipedia - Baseline | 3.60 | 1.80% | 9.00 | 2.50 |
| RedPajama - Wikipedia - ≥4096 tokens | 1.04 | 0.52% | 2.60 | 2.50 |
| The Stack - Markdown - Baseline | 4.50 | 2.25% | 11.25 | 2.50 |
| The Stack - Markdown - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| Semantic Scholar ORC - Baseline | 3.30 | 1.65% | 8.25 | 2.50 |
| Semantic Scholar ORC - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| RedPajama - Books - Baseline | 3.00 | 1.50% | 7.50 | 2.50 |
| RedPajama - Books - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| RedPajama - arXiv - Baseline | 1.92 | 0.96% | 4.80 | 2.50 |
| RedPajama - arXiv - ≥4096 tokens | 5.40 | 2.70% | 13.50 | 2.50 |
| RedPajama - StackExchange - Baseline | 1.44 | 0.72% | 3.60 | 2.50 |
| RedPajama - StackExchange - ≥4096 tokens | 1.52 | 1.40% | 7.00 | 4.60 |
| N Training Tokens | 200 | 100.00% | | 2.5 epochs * 200B = 500B tokens |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-8k is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
second-state/CodeLlama-13B-Instruct-GGUF | second-state | 2024-03-20T07:22:41Z | 421 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"llama-2",
"code",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-31T10:51:59Z | ---
language:
- code
license: llama2
tags:
- llama-2
model_name: CodeLlama 13B Instruct
base_model: codellama/CodeLlama-13b-Instruct-hf
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CodeLlama-13B-Instruct
## Original Model
[codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
- Prompt template
- Prompt type: `codellama-instruct`
- Prompt string
```text
<s>[INST] <<SYS>>
Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: <</SYS>>
{prompt} [/INST]
```
- Context size: `5120`
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:CodeLlama-13b-Instruct-hf-Q5_K_M.gguf llama-chat.wasm -p codellama-instruct
```
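Outside of LlamaEdge, the same GGUF files can also be driven from Python with `llama-cpp-python`. A hedged sketch using the prompt template above and the Q5_K_M file from the table below:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="second-state/CodeLlama-13B-Instruct-GGUF",
    filename="CodeLlama-13b-Instruct-hf-Q5_K_M.gguf",
    n_ctx=5120,
)

# Follow the codellama-instruct prompt template shown above.
prompt = (
    "<s>[INST] <<SYS>>\n"
    "Write code to solve the following coding problem that obeys the constraints and "
    "passes the example test cases. Please wrap your code answer using ```: <</SYS>>\n\n"
    "Write a function that checks whether a string is a palindrome. [/INST]"
)

out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```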
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [CodeLlama-13b-Instruct-hf-Q2_K.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q2_K.gguf) | Q2_K | 2 | 5.43 GB| smallest, significant quality loss - not recommended for most purposes |
| [CodeLlama-13b-Instruct-hf-Q3_K_L.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| small, substantial quality loss |
| [CodeLlama-13b-Instruct-hf-Q3_K_M.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| very small, high quality loss |
| [CodeLlama-13b-Instruct-hf-Q3_K_S.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| very small, high quality loss |
| [CodeLlama-13b-Instruct-hf-Q4_0.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [CodeLlama-13b-Instruct-hf-Q4_K_M.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| medium, balanced quality - recommended |
| [CodeLlama-13b-Instruct-hf-Q4_K_S.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| small, greater quality loss |
| [CodeLlama-13b-Instruct-hf-Q5_0.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [CodeLlama-13b-Instruct-hf-Q5_K_M.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| large, very low quality loss - recommended |
| [CodeLlama-13b-Instruct-hf-Q5_K_S.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| large, low quality loss - recommended |
| [CodeLlama-13b-Instruct-hf-Q6_K.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q6_K.gguf) | Q6_K | 6 | 10.7 GB| very large, extremely low quality loss |
| [CodeLlama-13b-Instruct-hf-Q8_0.gguf](https://huggingface.co/second-state/CodeLlama-13B-Instruct-GGUF/blob/main/CodeLlama-13b-Instruct-hf-Q8_0.gguf) | Q8_0 | 8 | 13.8 GB| very large, extremely low quality loss - not recommended |
|
Yntec/Crystalwave | Yntec | 2023-11-11T20:29:11Z | 421 | 5 | diffusers | [
"diffusers",
"safetensors",
"General",
"Style",
"Synthwave",
"Art",
"Render",
"Lineart",
"PublicPrompts",
"WarAnakin",
"RunDiffusion",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-11T06:57:51Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Style
- Synthwave
- Art
- Render
- Lineart
- PublicPrompts
- WarAnakin
- RunDiffusion
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Crystalwave
CrystalClearRemix with Synthwave's style. Use snthwve in the prompt for a stronger effect.
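A minimal, hedged diffusers sketch (assuming the standard `StableDiffusionPipeline` loading path; the prompt is adapted from the samples below and uses the `snthwve` token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/Crystalwave", torch_dtype=torch.float16
).to("cuda")

# "snthwve" in the prompt strengthens the synthwave style.
image = pipe("snthwve wireframe neon lines, retro car driving into the sunset, highly detailed").images[0]
image.save("crystalwave_sample.png")
```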
Comparison:

(Click for larger)
Samples and prompts (scroll for more example prompts and their outputs)

sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, 4k. chibi, character, comic art. Concept art on canvas by Clay Mann and ilya kuvshinov

(Click for larger)
Top left: snthwve wireframe neon lines, (dark colors_1.2) with (lines polygonal mountain_1.2) in background, (saturn with ring behind the mountain), (background is (white) starry night)!!, babel in the, tower, (((, barren, realistic minimalistic, intricate, highly detailed, sharp focus, 3D, digital painting, illustration, vaporwave,
Top right: (neon circle floating_1.2) (with palm tree inside it_1.1), above ocean, neon lights, snthwve style wallpaper featured on artstation, artwork by Greg Rutkowski and Beksinski, Esao Andrews, guisser, character design, very coherent symmetrical artwork, by anton fadeev dzimirsky and albrecht anker
Bottom left: (retro volkswagen van_1.1) (attached to hot air balloon_1.2), (dark colors_1.2), snthwve style wallpaper. graffiti on black canvas, digital art, concept art, trending on artstation, HD, 8K, highly detailed, good lighting,lighthouse, epic, masterpiece, surreal, detailed, sharp, 1980s
Bottom right: table with cassette tape with sunglasses on it, table with cassette tape with sunglasses on it, dark colors, snthwve style wallpaper. Polaroid, photography, 4k, photoreal, highly detailed, cinematic lightning

(Click for larger)
Top left: (tank) driving down the street to the sunset, outrun, snthwve style wallpaper, 8k, highly detailed, cinematic lighting, conceptmento, very realistic, clear, super detailed, colorful accents, detailed and intricate, trending on artstation, in the style in the style of artstation, highly realistic, very detailed, 4k, 8k, octane
Top right: pretty cute girl sitting on motorcycle, (facing the camera_1.2), (looking at camera_1.1), neon lines, snthwve style, HD wallpaper, detailed eyes, very beautiful, elegant, cinematic, artstation, octane render, art by art by artgerm and greg rutkowski and magali villeneuve, portrait, elegant, d ho
Bottom left: island with palm trees, island with palm trees, wireframe looking neon lines, snthwve style wallpaper, dorovski, sharp focus, by wlop and artgerm and alphonse mucha, illustration, trending on artstation, very coherent
Bottom right: Tiny cute isometric tank, Tiny cute isometric tank, snthwve soft smooth lighting, with soft colors, 100mm lens, 3d blender render, trending on polycount. render in octane, vray rendering, v-ray, C4D
# Recipe:
- SuperMerger Weight sum Train Difference Use MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1
Model A:
Synthwave
Model B:
Crystal Clear Remix
Output Model:
Crystalwave |
digiplay/asyncsMIX_v5 | digiplay | 2023-12-01T22:15:11Z | 421 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-12-01T21:37:31Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/114807?modelVersionId=240130
Sample image I made, generated with Hugging Face's API:

|
01-ai/Yi-VL-34B | 01-ai | 2024-06-26T10:44:00Z | 421 | 248 | pytorch | [
"pytorch",
"llava",
"image-text-to-text",
"arxiv:2403.04652",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2023-12-25T03:26:32Z | ---
license: apache-2.0
library_name: pytorch
tags:
- llava
license_name: yi-license
license_link: LICENSE
pipeline_tag: image-text-to-text
---
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
<img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
</picture>
</div>
<div align="center">
<h1 align="center">Yi Vision Language Model</h1>
</div>
<div align="center">
<h3 align="center">Better Bilingual Multimodal Model</h3>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • 🟣 <a href="https://wisemodel.cn/organization/01.AI" target="_blank">wisemodel</a>
</p>
<p align="center">
👩🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>!
</p>
<p align="center">
👋 Join us 💬 <a href="https://github.com/01-ai/Yi/issues/43#issuecomment-1827285245" target="_blank"> WeChat (Chinese) </a>!
</p>
<p align="center">
📚 Grow at <a href="https://github.com/01-ai/Yi/blob/main/docs/learning_hub.md"> Yi Learning Hub </a>!
</p>
<hr>
<!-- DO NOT REMOVE ME -->
<details open>
<summary><b>📕 Table of Contents</b></summary>
- [What is Yi-VL?](#what-is-yi-vl)
- [Overview](#overview)
- [Models](#models)
- [Features](#features)
- [Architecture](#architecture)
- [Training](#training)
- [Limitations](#limitations)
- [Why Yi-VL?](#why-yi-vl)
- [Tech report](#tech-report)
- [Benchmarks](#benchmarks)
- [Showcases](#showcases)
- [How to use Yi-VL?](#how-to-use-yi-vl)
- [Quick start](#quick-start)
- [Hardware requirements](#hardware-requirements)
- [Misc.](#misc)
- [Acknowledgements and attributions](#acknowledgements-and-attributions)
- [List of used open-source projects](#list-of-used-open-source-projects)
- [License](#license)
</details>
<hr>
# What is Yi-VL?
## Overview
- **Yi Vision Language (Yi-VL)** model is the open-source, multimodal version of the Yi **Large Language Model (LLM)** series, enabling content comprehension, recognition, and multi-round conversations about images.
- Yi-VL demonstrates exceptional performance, **ranking first** among all existing open-source models in the latest benchmarks including [MMMU](https://mmmu-benchmark.github.io/#leaderboard) in English and [CMMMU](https://cmmmu-benchmark.github.io) in Chinese (based on data available up to January 2024).
- Yi-VL-34B is the **first** open-source 34B vision language model worldwide.
## Models
Yi-VL has released the following versions.
Model | Download
|---|---
Yi-VL-34B |• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-VL-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-VL-34B)
Yi-VL-6B | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-VL-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-VL-6B)
## Features
Yi-VL offers the following features:
- Multi-round text-image conversations: Yi-VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
- Bilingual text support: Yi-VL supports conversations in both English and Chinese, including text recognition in images.
- Strong image comprehension: Yi-VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
- Fine-grained image resolution: Yi-VL supports image understanding at a higher resolution of 448×448.
## Architecture
Yi-VL adopts the [LLaVA](https://github.com/haotian-liu/LLaVA) architecture, which is composed of three primary components:
- Vision Transformer (ViT): it's initialized with [CLIP ViT-H/14 model](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) and used for image encoding.
- Projection Module: it's designed to align image features with text feature space, consisting of a two-layer Multilayer Perceptron (MLP) with layer normalizations.
- Large Language Model (LLM): it's initialized with [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) or [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat), demonstrating exceptional proficiency in understanding and generating both English and Chinese.

## Training
### Training process
Yi-VL is trained to align visual information well to the semantic space of Yi LLM, which undergoes a comprehensive three-stage training process:
- Stage 1: The parameters of ViT and the projection module are trained using an image resolution of 224×224. The LLM weights are frozen. The training leverages an image caption dataset comprising 100 million image-text pairs from [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/). The primary objective is to enhance the ViT's knowledge acquisition within our specified architecture and to achieve better alignment between the ViT and the LLM.
- Stage 2: The image resolution of ViT is scaled up to 448×448, and the parameters of ViT and the projection module are trained. It aims to further boost the model's capability for discerning intricate visual details. The dataset used in this stage includes about 25 million image-text pairs, such as [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/), [CLLaVA](https://huggingface.co/datasets/LinkSoul/Chinese-LLaVA-Vision-Instructions), [LLaVAR](https://llavar.github.io/), [Flickr](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset), [VQAv2](https://paperswithcode.com/dataset/visual-question-answering-v2-0), [RefCOCO](https://github.com/lichengunc/refer/tree/master), [Visual7w](http://ai.stanford.edu/~yukez/visual7w/) and so on.
- Stage 3: The parameters of the entire model (that is, ViT, projection module, and LLM) are trained. The primary goal is to enhance the model's proficiency in multimodal chat interactions, thereby endowing it with the ability to seamlessly integrate and interpret visual and linguistic inputs. To this end, the training dataset encompasses a diverse range of sources, totalling approximately 1 million image-text pairs, including [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html), [VizWiz VQA](https://vizwiz.org/tasks-and-datasets/vqa/), [TextCaps](https://opendatalab.com/OpenDataLab/TextCaps), [OCR-VQA](https://ocr-vqa.github.io/), [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/api.html), [LAION GPT4V](https://huggingface.co/datasets/laion/gpt4v-dataset) and so on. To ensure data balancing, we impose a cap on the maximum data contribution from any single source, restricting it to no more than 50,000 pairs.
Below are the parameters configured for each stage.
Stage | Global batch size | Learning rate | Gradient clip | Epochs
|---|---|---|---|---
Stage 1, 2 |4096|1e-4|0.5|1
Stage 3|256|2e-5|1.0|2
### Training resource consumption
- The training consumes 128 NVIDIA A800 (80G) GPUs.
- The total training time amounted to approximately 10 days for Yi-VL-34B and 3 days for Yi-VL-6B.
## Limitations
This is the initial release of the Yi-VL, which comes with some known limitations. It is recommended to carefully evaluate potential risks before adopting any models.
- Feature limitation
- Visual question answering is supported. Other features like text-to-3D and image-to-video are not yet supported.
- Only a single image (rather than several images) can be accepted as input.
- Hallucination problem
- There is a certain possibility of generating content that does not exist in the image.
- In scenes containing multiple objects, some objects might be incorrectly identified or described with insufficient detail.
- Resolution issue
- Yi-VL is trained on images with a resolution of 448×448. During inference, inputs of any resolution are resized to 448×448. Low-resolution images may result in information loss, and higher-resolution images (above 448×448) do not bring in extra knowledge.
- Other limitations of the Yi LLM.
# Why Yi-VL?
## Tech report
For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652).
### Citation
```
@misc{ai2024yi,
title={Yi: Open Foundation Models by 01.AI},
author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai},
year={2024},
eprint={2403.04652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Benchmarks
Yi-VL outperforms all existing open-source models in [MMMU](https://mmmu-benchmark.github.io) and [CMMMU](https://cmmmu-benchmark.github.io), two advanced benchmarks that include massive multi-discipline multimodal questions (based on data available up to January 2024).
- MMMU

- CMMMU

## Showcases
Below are some representative examples of detailed description and visual question answering, showcasing the capabilities of Yi-VL.
- English

- Chinese

# How to use Yi-VL?
## Quick start
Please refer to [Yi GitHub Repo](https://github.com/01-ai/Yi/tree/main/VL) for details.
## Hardware requirements
For model inference, the recommended GPU examples are:
- Yi-VL-6B: RTX 3090, RTX 4090, A10, A30
- Yi-VL-34B: 4 × RTX 4090, A800 (80 GB)
# Misc.
## Acknowledgements and attributions
This project makes use of open-source software/components. We acknowledge and are grateful to these developers for their contributions to the open-source community.
### List of used open-source projects
1. LLaVA
- Authors: Haotian Liu, Chunyuan Li, Qingyang Wu, Yuheng Li, and Yong Jae Lee
- Source: https://github.com/haotian-liu/LLaVA
- License: Apache-2.0 license
- Description: The codebase is based on LLaVA code.
2. OpenClip
- Authors: Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt
- Source: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K
- License: MIT
- Description: The ViT is initialized using the weights of OpenClip.
**Notes**
- This attribution does not claim to cover all open-source components used. Please check individual components and their respective licenses for full details.
- The use of the open-source components is subject to the terms and conditions of the respective licenses.
We appreciate the open-source community for their invaluable contributions to the technology world.
## License
Please refer to the [acknowledgements and attributions](#acknowledgements-and-attributions), as well as the individual components, for the license of the source code.
The Yi series models are fully open for academic research and free for commercial use, permissions of which are automatically granted upon application.
All usage must adhere to the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE).
For free commercial use, you only need to send an email to get official commercial permission. |
saracandu/llama-2-7b-LoRA-64-harrypotter | saracandu | 2024-02-23T15:46:52Z | 421 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-23T15:38:34Z | Entry not found |
mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF | mradermacher | 2024-05-06T05:37:48Z | 421 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/SOLAR-Maid-4x10.7B",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-29T10:01:39Z | ---
base_model: NeverSleep/SOLAR-Maid-4x10.7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/NeverSleep/SOLAR-Maid-4x10.7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
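As a minimal example, a downloaded quant can be run with llama.cpp's CLI as sketched below; the file name is one of the quants listed in the table further down, and the flags assume a recent llama.cpp build.
```bash
# Run a downloaded quant directly with the llama.cpp CLI
./llama-cli -m SOLAR-Maid-4x10.7B.i1-Q4_K_M.gguf -p "Write a short story about a lighthouse." -n 256
```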
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ2_S.gguf) | i1-IQ2_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ2_M.gguf) | i1-IQ2_M | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q2_K.gguf) | i1-Q2_K | 13.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ3_S.gguf) | i1-IQ3_S | 15.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ3_M.gguf) | i1-IQ3_M | 16.1 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 19.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 20.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q4_0.gguf) | i1-Q4_0 | 20.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 22.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-Maid-4x10.7B-i1-GGUF/resolve/main/SOLAR-Maid-4x10.7B.i1-Q6_K.gguf) | i1-Q6_K | 29.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
vwxyzjn/rm_zephyr | vwxyzjn | 2024-06-07T03:22:07Z | 421 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-05-05T19:36:40Z | ---
license: apache-2.0
tags:
- trl
- reward-trainer
- generated_from_trainer
base_model: alignment-handbook/zephyr-7b-sft-full
metrics:
- accuracy
model-index:
- name: rm_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rm_zephyr
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4657
- Accuracy: 0.7599
## Model description
More information needed
## Intended uses & limitations
More information needed
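As a rough sketch, a sequence-classification reward model like this one can be queried for a scalar preference score as shown below; the chat formatting and prompt are illustrative assumptions, not the exact format used during training.
```python
# Minimal sketch: score a candidate response with the reward model.
# The "User:/Assistant:" formatting below is an assumption for illustration only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vwxyzjn/rm_zephyr")
model = AutoModelForSequenceClassification.from_pretrained(
    "vwxyzjn/rm_zephyr", torch_dtype=torch.bfloat16, device_map="auto"
)

text = "User: What is the capital of France?\nAssistant: The capital of France is Paris."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    reward = model(**inputs).logits[0].item()  # higher = preferred by the reward model
print(reward)
```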
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5368 | 0.4290 | 100 | 0.4863 | 0.7537 |
| 0.4497 | 0.8580 | 200 | 0.4657 | 0.7599 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BAAI/Bunny-v1_1-4B | BAAI | 2024-06-24T13:42:10Z | 421 | 23 | transformers | [
"transformers",
"safetensors",
"bunny-phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2402.11530",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-05-08T12:01:05Z | ---
inference: false
license: apache-2.0
---
# Model Card
<p align="center">
<img src="./icon.png" alt="Logo" width="350">
</p>
📖 [Technical report](https://arxiv.org/abs/2402.11530) | 🏠 [Code](https://github.com/BAAI-DCAI/Bunny) | 🐰 [Demo](http://bunny.baai.ac.cn)
This is Bunny-v1.1-4B.
Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP and SigLIP, and language backbones, including Phi-3-mini, Llama-3-8B, Phi-1.5, StableLM-2 and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source.
We provide Bunny-v1.1-4B, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) with [S \\(^{2}\\)-Wrapper](https://github.com/bfshi/scaling_on_scales), supporting 1152x1152 resolution. More details about this model can be found in [GitHub](https://github.com/BAAI-DCAI/Bunny).

# Quickstart
Here we show a code snippet to show you how to use the model with transformers.
Before running the snippet, you need to install the following dependencies:
```shell
pip install torch transformers accelerate pillow
```
If there is enough CUDA memory, it is faster to execute this snippet by setting `CUDA_VISIBLE_DEVICES=0`.
Users, especially those in mainland China, may want to refer to a Hugging Face [mirror site](https://hf-mirror.com).
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings
# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')
# set device
device = 'cuda' # or cpu
torch.set_default_device(device)
# create model
model = AutoModelForCausalLM.from_pretrained(
'BAAI/Bunny-v1_1-4B',
torch_dtype=torch.float16, # float32 for cpu
device_map='auto',
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
'BAAI/Bunny-v1_1-4B',
trust_remote_code=True)
# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][1:], dtype=torch.long).unsqueeze(0).to(device)
# image, sample images can be found in images folder
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)
# generate
output_ids = model.generate(
input_ids,
images=image_tensor,
max_new_tokens=100,
use_cache=True,
repetition_penalty=1.0 # increase this to avoid chattering
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
|
ucalyptus/prem-615M-chat | ucalyptus | 2024-05-09T22:30:15Z | 421 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:premai-io/prem-1B-chat",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-09T22:29:43Z | ---
base_model:
- premai-io/prem-1B-chat
library_name: transformers
tags:
- mergekit
- merge
---
# output-model-directory
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [premai-io/prem-1B-chat](https://huggingface.co/premai-io/prem-1B-chat)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 11]
model: premai-io/prem-1B-chat
- layer_range: [11, 22]
model: premai-io/prem-1B-chat
merge_method: passthrough
dtype: float16
```
|
mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF | mradermacher | 2024-05-16T21:55:22Z | 421 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:crestf411/llama-3-daybreak-storywriter-v0.2-70b-hf",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-14T09:28:53Z | ---
base_model: crestf411/llama-3-daybreak-storywriter-v0.2-70b-hf
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/crestf411/llama-3-daybreak-storywriter-v0.2-70b-hf
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
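For the multi-part Q6_K files listed in the table below, a plain byte-wise concatenation is typically all that is needed before loading; the file names are taken from the table, but please verify the exact procedure against the linked README.
```bash
# Join the split GGUF parts into a single file before use
cat llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q6_K.gguf.part1of2 \
    llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q6_K.gguf.part2of2 \
    > llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q6_K.gguf
```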
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-daybreak-storywriter-v0.2-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-storywriter-v0.2-70b-hf.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ukung/DeepSeek-SQL-Expert-GGUF | ukung | 2024-05-20T14:59:58Z | 421 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-20T14:36:54Z | ---
license: apache-2.0
---
|
John6666/yamers-anime-sa-sdxl | John6666 | 2024-05-26T13:52:18Z | 421 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-05-26T13:47:48Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://civitai.com/models/76489/sdxl-yamers-anime-unstable-illustrator).
|
BHOSAI/Pichilti-base-v1 | BHOSAI | 2024-05-27T18:43:15Z | 421 | 2 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Automatic Speech Recognition",
"Azerbaijani",
"Speech-to-text",
"az",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-05-26T18:15:49Z | ---
license: cc-by-sa-4.0
language:
- az
tags:
- Automatic Speech Recognition
- Azerbaijani
- Speech-to-text
---
<h1> Pichilti-base for Automatic Speech Recognition in Azerbaijani</h1>
The Baku Higher Oil School Research and Development Center on AI introduces its research on building a better Whisper model on a monolingual basis. The model takes audio as input and converts it to text. It has been trained in a self-supervised way on over 500,000 audio recordings without any labels, which allowed us to bypass the heavy labelling procedure.
The model was originally pre-trained on a multilingual basis by OpenAI. The original model is called Whisper and comes in variants such as tiny, base, small, medium, large-v2 and large-v3. Model capacity increases together with computational cost, so smaller versions are preferred for large-scale operations, while larger models are used in production when accuracy is critical.
Considering the power of pretraining in the Whisper models, we decided to keep the encoder as it is: our tests showed that the encoder is very robust to noise and generates the necessary audio features, largely because the model was trained on 680,000 hours of data and performs well zero-shot. Since the decoder handles multiple tasks such as translation and transcription, we froze the encoder and fine-tuned the model in a self-supervised manner. This gave us a better CER (character error rate) while the computational cost decreased significantly.
Due to ongoing research, details of training will be published after the acceptance of the paper.
<h1> Try it out </h1>
To try this code on your own server or PC, first install the following two packages:
```console
pip install openai-whisper
pip install transformers
```
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from whisper import load_audio
waveform = load_audio("test.mp3")
processor = WhisperProcessor.from_pretrained("BHOSAI/Pichilti-base-v1")
model = WhisperForConditionalGeneration.from_pretrained("BHOSAI/Pichilti-base-v1")
input_features = processor(
    waveform, sampling_rate=16000, return_tensors="pt"  # whisper.load_audio resamples to 16 kHz
).input_features
# Generate token ids
predicted_ids = model.generate(input_features)
# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```
<h1> About Us </h1>
Baku Higher Oil School Research and Development Center on AI is a team of students who are passionate about contributing to the open-source Azerbaijani NLP community. The center is based in Baku, Azerbaijan.
|
MaziyarPanahi/albertlight-7b-GGUF | MaziyarPanahi | 2024-05-29T13:05:28Z | 421 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"llama",
"text-generation",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:AgentPublic/albertlight-7b"
]
| text-generation | 2024-05-29T12:13:39Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- fr
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: albertlight-7b-GGUF
base_model: AgentPublic/albertlight-7b
inference: false
model_creator: AgentPublic
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/albertlight-7b-GGUF](https://huggingface.co/MaziyarPanahi/albertlight-7b-GGUF)
- Model creator: [AgentPublic](https://huggingface.co/AgentPublic)
- Original model: [AgentPublic/albertlight-7b](https://huggingface.co/AgentPublic/albertlight-7b)
## Description
[MaziyarPanahi/albertlight-7b-GGUF](https://huggingface.co/MaziyarPanahi/albertlight-7b-GGUF) contains GGUF format model files for [AgentPublic/albertlight-7b](https://huggingface.co/AgentPublic/albertlight-7b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
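As one concrete example, the llama-cpp-python library mentioned above can load these files in a few lines; the file name below is a placeholder for whichever quant you download from this repo.
```python
# Minimal sketch using llama-cpp-python with a downloaded GGUF quant
from llama_cpp import Llama

llm = Llama(model_path="albertlight-7b.Q4_K_M.gguf", n_ctx=2048)
output = llm("Question: Quelle est la capitale de la France ?\nRéponse:", max_tokens=64)
print(output["choices"][0]["text"])
```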
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
yyupenn/whylesionclip | yyupenn | 2024-06-07T13:44:29Z | 421 | 0 | open_clip | [
"open_clip",
"clip",
"zero-shot-image-classification",
"arxiv:2405.14839",
"arxiv:1608.06993",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2024-06-06T21:54:01Z | ---
license: mit
widget:
- src: >-
https://www.verywellhealth.com/thmb/yCmWnfp16fvO0C7qB2jUUcUq8XY=/1500x0/filters:no_upscale():max_bytes(150000):strip_icc()/GettyImages-1451577990-07db16e0a41043bc968d5cbf2dbaec83.jpg
candidate_labels: dark color, light color, irregular shape
example_title: Malignant Melanoma
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model Card for WhyLesionCLIP 👍🏽
# Table of Contents
1. [Model Details](#model-details)
2. [Get Started](#how-to-get-started-with-the-model)
3. [Uses](#uses)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
## Model Details
WhyLesionCLIP can align skin lesion images with text descriptions. It is fine-tuned from [OpenCLIP (ViT-L/14)](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K) on [ISIC](https://gallery.isic-archive.com/#!/topWithHeader/onlyHeaderTop/gallery) with clinical reports generated by GPT-4V. WhyLesionCLIP significantly outperforms PubMedCLIP, BioMedCLIP, etc. in zero-shot and linear probing on various skin lesion datasets. (See results in [Evaluation](#evaluation)) While our CLIP models excel with careful data curation, training converges quickly, suggesting the current contrastive objective might not fully exploit the information from the data, potentially taking shortcuts, such as comparing images from different patients instead of focusing on diseases. Future research should explore more suitable objectives and larger-scale data collections to develop more robust medical foundation models.
- **Paper:** https://arxiv.org/pdf/2405.14839
- **Website:** https://yueyang1996.github.io/knobo/
- **Repository:** https://github.com/YueYANG1996/KnoBo
## How to Get Started with the Model
Use the code below to get started with the model.
```bash
pip install open_clip_torch
```
```python
import torch
from PIL import Image
import open_clip
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whylesionclip")
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-L-14")
image = preprocess(Image.open("test_skin.jpg")).unsqueeze(0)
text = tokenizer(["dark brown", "bleeding", "irregular shape"])
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
## Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot medical image (skin lesion) classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
### Direct Use
WhyLesionCLIP can be used for zero-shot skin lesion classification. You can use it to compute the similarity between a skin lesion image and a text description.
### Downstream Use
WhyLesionCLIP can be used as a feature extractor for downstream tasks. You can use it to extract features from skin lesion images and text descriptions for other downstream tasks.
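A minimal sketch of such a downstream use, extracting frozen image features for a linear probe, is shown below; the image paths and labels are placeholders for your own lesion dataset.
```python
# Sketch: use WhyLesionCLIP as a frozen feature extractor for linear probing.
# Image paths and labels are placeholders, not a real dataset.
import torch
import numpy as np
import open_clip
from PIL import Image
from sklearn.linear_model import LogisticRegression

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whylesionclip")
model.eval()

def embed(paths):
    feats = []
    with torch.no_grad():
        for p in paths:
            img = preprocess(Image.open(p)).unsqueeze(0)
            f = model.encode_image(img)
            feats.append((f / f.norm(dim=-1, keepdim=True)).squeeze(0).numpy())
    return np.stack(feats)

train_paths, train_labels = ["lesion_001.jpg", "lesion_002.jpg"], [0, 1]  # placeholders
clf = LogisticRegression(max_iter=1000).fit(embed(train_paths), train_labels)
print(clf.predict(embed(["lesion_003.jpg"])))
```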
### Out-of-Scope Use
WhyLesionCLIP should not be used for clinical diagnosis or treatment. It is not intended to be used for any clinical decision-making. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training Details
### Training Data
We employ the [ISIC](https://gallery.isic-archive.com/#!/topWithHeader/onlyHeaderTop/gallery) dataset and use GPT-4V to generate clinical reports for 56,590 images. We preprocess these reports by extracting medically relevant findings, each described in a short and concise term. In total, we assemble 438K image-text pairs for training WhyLesionCLIP.
### Training Details
We utilize the training script from [OpenCLIP](https://github.com/mlfoundations/open_clip) and select [ViT-L/14](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K) as the backbone. Training is performed on 4 RTX A6000 GPUs for 10 epochs with a batch size of 128 and a learning rate of 1e−5. We choose checkpoints based on the lowest contrastive loss on validation sets.
## Evaluation
### Testing Data
We evaluate on 5 skin lesion classification datasets: [HAM10000](https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000), [BCN20000](https://challenge.isic-archive.com/landing/2019/), [PAD-UFES-20](https://www.kaggle.com/datasets/mahdavi1202/skin-cancer), [Melanoma](https://www.kaggle.com/datasets/hasnainjaved/melanoma-skin-cancer-dataset-of-10000-images), and [UWaterloo](https://uwaterloo.ca/vision-image-processing-lab/research-demos/skin-cancer-detection). We report the zero-shot and linear probing accuracy on the above 5 datasets.
### Baselines
We compare various CLIP models, including [OpenAI-CLIP](https://huggingface.co/openai/clip-vit-large-patch14), [OpenCLIP](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K), [PubMedCLIP](https://huggingface.co/flaviagiammarino/pubmed-clip-vit-base-patch32), [BioMedCLIP](https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224), [PMC-CLIP](https://huggingface.co/ryanyip7777/pmc_vit_l_14) and [MedCLIP](https://github.com/RyanWangZf/MedCLIP). We evaluate these models in both zero-shot and linear probe scenarios. In zero-shot, GPT-4 generates prompts for each class, and we use the ensemble of cosine similarities between the image and prompts as the score for each class. In linear probing, we use the CLIP models as image encoders to extract features for logistic regression. Additionally, we include [DenseNet-121](https://arxiv.org/pdf/1608.06993) (fine-tuned on the pretraining datasets with cross-entropy loss) as a baseline for linear probing.
### Results
The figure below shows the averaged Zero-shot and Linear Probe performance of different models on five skin lesion datasets.

## Citation
Please cite our paper if you use this model in your work:
```
@article{yang2024textbook,
title={A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis},
author={Yue Yang and Mona Gandhi and Yufei Wang and Yifan Wu and Michael S. Yao and Chris Callison-Burch and James C. Gee and Mark Yatskar},
journal={arXiv preprint arXiv:2405.14839},
year={2024}
}
``` |
Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q5_K_M-GGUF | Tanvir1337 | 2024-06-26T10:34:26Z | 421 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:Severian/Internal-Knowledge-Map",
"base_model:Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-26T10:34:05Z | ---
base_model: Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B
datasets:
- Severian/Internal-Knowledge-Map
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
---
# Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B`](https://huggingface.co/Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q5_K_M-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q5_K_M-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q5_K_M-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q5_K_M-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q5_k_m.gguf -c 2048
```
|
keras-io/deeplabv3p-resnet50 | keras-io | 2022-02-11T08:52:59Z | 420 | 3 | keras | [
"keras",
"computer-vision",
"image-segmentation",
"arxiv:1811.12596",
"arxiv:1802.02611",
"arxiv:1706.05587",
"arxiv:1606.00915",
"license:cc0-1.0",
"region:us"
]
| image-segmentation | 2022-03-02T23:29:05Z | ---
tags:
- computer-vision
- image-segmentation
license:
- cc0-1.0
library_name: keras
---
## Multiclass semantic segmentation using DeepLabV3+
This repo contains the model and the notebook [to this Keras example on Multiclass semantic segmentation using DeepLabV3+](https://keras.io/examples/vision/deeplabv3_plus/).
Full credits to: [Soumik Rakshit](http://github.com/soumik12345)
The model is trained for demonstrative purposes and does not guarantee the best results in production. For better results, follow & optimize the [Keras example](https://keras.io/examples/vision/deeplabv3_plus/) as per your need.
## Background Information
Semantic segmentation, with the goal to assign semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks.
## Training Data
The model is trained on a subset (10,000 images) of [Crowd Instance-level Human Parsing Dataset](https://arxiv.org/abs/1811.12596). The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the "human part segmentation" task.
## Model
The model uses ResNet50 pretrained on ImageNet as the backbone model.
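A minimal sketch for loading this checkpoint and running inference is shown below; the `from_pretrained_keras` helper comes from `huggingface_hub`, and the 512×512 input size and scaling follow the linked Keras example (treat both as assumptions to verify there).
```python
# Sketch: load the DeepLabV3+ checkpoint from the Hub and segment one image.
# Input size and preprocessing follow the linked Keras example (assumed here).
import numpy as np
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/deeplabv3p-resnet50")

image = tf.io.read_file("person.jpg")
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [512, 512])
image = image / 127.5 - 1.0  # scale to [-1, 1], as in the Keras example

logits = model.predict(tf.expand_dims(image, axis=0))
mask = np.argmax(logits, axis=-1)[0]  # per-pixel class ids (20 human-part categories)
print(mask.shape)
```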
References:
1. [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/pdf/1802.02611.pdf)
2. [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
3. [DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs](https://arxiv.org/abs/1606.00915) |
samrawal/medical-sentence-tokenizer | samrawal | 2022-05-30T19:12:19Z | 420 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-24T22:05:09Z | ---
license: apache-2.0
---
`clinitokenizer` is a sentence tokenizer that splits unstructured clinical text (such as Electronic Medical Records) into individual sentences.
To use this model, see the [clinitokenizer repository](https://github.com/clinisift/clinitokenizer).
General English sentence tokenizers are often unable to correctly parse medical abbreviations, jargon, and other conventions often used in medical records (see "Motivating Examples" section below). clinitokenizer is specifically trained on medical record data and can perform better in these situations (conversely, for non-domain specific use, using more general sentence tokenizers may yield better results).
The model has been trained on multiple datasets provided by [i2b2 (now n2c2)](https://n2c2.dbmi.hms.harvard.edu). Please visit the n2c2 site to request access to the dataset.
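If you only want to inspect the underlying token-classification model directly with 🤗 Transformers (the clinitokenizer package wraps it with a friendlier sentence-splitting API), a rough sketch is shown below; the label names and how they map to sentence boundaries are not documented here, so treat the output as raw predictions.
```python
# Rough sketch: run the raw token-classification model; the clinitokenizer
# package is the recommended interface for actual sentence splitting.
from transformers import pipeline

clf = pipeline("token-classification", model="samrawal/medical-sentence-tokenizer")
text = "Pt c/o chest pain x2 days. Hx of HTN, DM2. Denies SOB."
for pred in clf(text):
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```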
|
scite/roberta-base-squad2-nq-bioasq | scite | 2022-10-04T16:10:49Z | 420 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-04T14:49:13Z | ---
license: apache-2.0
tags:
- question-answering
- generated_from_trainer
model-index:
- name: roberta-base-squad2-nq-bioasq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-nq-bioasq
## Model description
This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on the BioASQ 10b dataset.
## Intended uses & limitations
Cross-domain question answering!
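For example, a minimal extractive-QA call with the 🤗 pipeline API looks like this; the question and context are illustrative.
```python
# Minimal sketch: extractive question answering with this checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="scite/roberta-base-squad2-nq-bioasq")
result = qa(
    question="What enzyme does metformin activate?",
    context="Metformin lowers hepatic glucose production, in part by activating AMP-activated protein kinase (AMPK).",
)
print(result["answer"], result["score"])
```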
## Training and evaluation data
Training: BioASQ 10B combined with SQuAD, with SQuAD subsampled evenly to match the number of BioASQ 10B samples
Eval: BioASQ 9B eval combined with SQuAD eval, with SQuAD subsampled evenly to match the number of BioASQ 9B eval samples
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
Went from untrained exact match: 60.9% (f1 71.8%) to exact match: 95.2% (96.6% f1) on BioASQ 9B held out training set.
Scores on SQUAD+BioASQ remained stable at exact match: 72.5% (f1 81.4%) to 88.5% (f1 93.3%).
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
qilowoq/AbLang_heavy | qilowoq | 2024-02-11T14:05:24Z | 420 | 6 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"chemistry",
"biology",
"protein",
"antibodies",
"antibody",
"heavy chain",
"AbLang",
"CDR",
"OAS",
"sentence-similarity",
"custom_code",
"license:bsd",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2023-04-29T00:38:22Z | ---
license: bsd
tags:
- chemistry
- biology
- protein
- antibodies
- antibody
- heavy chain
- AbLang
- CDR
- OAS
pipeline_tag: sentence-similarity
---
### AbLang model for heavy chains
This is a 🤗 version of AbLang: A language model for antibodies. It was introduced in
[this paper](https://doi.org/10.1101/2022.01.20.477061) and first released in
[this repository](https://github.com/oxpig/AbLang). This model is trained on uppercase amino acids: it only works with capital letter amino acids.
### Intended uses & limitations
The model could be used for protein feature extraction or to be fine-tuned on downstream tasks (TBA).
### How to use
Here is how to use this model to get the features of a given antibody sequence in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('qilowoq/AbLang_heavy')
model = AutoModel.from_pretrained('qilowoq/AbLang_heavy', trust_remote_code=True)
sequence_Example = ' '.join("EVQLQESGPGLVKPSETLSLTCTVSGGPINNAYWTWIRQPPGKGLEYLGYVYHTGVTNYNPSLKSRLTITIDTSRKQLSLSLKFVTAADSAVYYCAREWAEDGDFGNAFHVWGQGTMVAVSSASTKGPSVFPLAPSSKSTSGGTAALGCL")
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
model_output = model(**encoded_input)
```
Sequence embeddings can be produced as follows:
```python
import torch

def get_sequence_embeddings(encoded_input, model_output):
mask = encoded_input['attention_mask'].float()
d = {k: v for k, v in torch.nonzero(mask).cpu().numpy()} # dict of sep tokens
# make sep token invisible
for i in d:
mask[i, d[i]] = 0
mask[:, 0] = 0.0 # make cls token invisible
mask = mask.unsqueeze(-1).expand(model_output.last_hidden_state.size())
sum_embeddings = torch.sum(model_output.last_hidden_state * mask, 1)
sum_mask = torch.clamp(mask.sum(1), min=1e-9)
return sum_embeddings / sum_mask
seq_embeds = get_sequence_embeddings(encoded_input, model_output)
```
### Fine-tune
To save memory we recommend using [LoRA](https://doi.org/10.48550/arXiv.2106.09685):
```bash
pip install git+https://github.com/huggingface/peft.git
pip install loralib
```
LoRA greatly reduces the number of trainable parameters and performs on par with or better than fine-tuning the full model.
```python
from peft import LoraConfig, get_peft_model
def apply_lora_bert(model):
config = LoraConfig(
r=8, lora_alpha=32,
lora_dropout=0.3,
target_modules=['query', 'value']
)
for param in model.parameters():
param.requires_grad = False # freeze the model - train adapters later
if param.ndim == 1:
# cast the small parameters (e.g. layernorm) to fp32 for stability
param.data = param.data.to(torch.float32)
model.gradient_checkpointing_enable() # reduce number of stored activations
model.enable_input_require_grads()
model = get_peft_model(model, config)
return model
model = apply_lora_bert(model)
model.print_trainable_parameters()
# trainable params: 294912 || all params: 85493760 || trainable%: 0.3449514911965505
```
### Citation
```
@article{Olsen2022,
title={AbLang: An antibody language model for completing antibody sequences},
author={Tobias H. Olsen, Iain H. Moal and Charlotte M. Deane},
journal={bioRxiv},
doi={https://doi.org/10.1101/2022.01.20.477061},
year={2022}
}
``` |
TheBloke/WizardLM-13B-1.0-GGUF | TheBloke | 2023-09-27T12:52:57Z | 420 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2304.12244",
"base_model:WizardLM/WizardLM-13B-V1.0",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T00:35:06Z | ---
license: other
model_name: WizardLM 13B 1.0
base_model: WizardLM/WizardLM-13B-V1.0
inference: false
model_creator: WizardLM
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WizardLM 13B 1.0 - GGUF
- Model creator: [WizardLM](https://huggingface.co/WizardLM)
- Original model: [WizardLM 13B 1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [WizardLM's WizardLM 13B 1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardLM-13B-1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF)
* [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
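If you are scripting prompts, a tiny helper that fills the `{prompt}` slot of this template might look like the following (purely an illustrative sketch, not part of the original model card):
```python
# Minimal helper for the Vicuna-style template shown above
VICUNA_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} ASSISTANT:"
)

def build_prompt(user_message: str) -> str:
    return VICUNA_TEMPLATE.format(prompt=user_message)

print(build_prompt("Explain what a GGUF file is."))
```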
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [wizardLM-13B-1.0.Q2_K.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [wizardLM-13B-1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [wizardLM-13B-1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [wizardLM-13B-1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [wizardLM-13B-1.0.Q4_0.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [wizardLM-13B-1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [wizardLM-13B-1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [wizardLM-13B-1.0.Q5_0.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [wizardLM-13B-1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [wizardLM-13B-1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [wizardLM-13B-1.0.Q6_K.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [wizardLM-13B-1.0.Q8_0.gguf](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGUF/blob/main/wizardLM-13B-1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WizardLM-13B-1.0-GGUF and below it, a specific filename to download, such as: wizardLM-13B-1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF wizardLM-13B-1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
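If you prefer to do the same from Python, a minimal sketch using `huggingface_hub` (same repo and filename as in the command above; the `local_dir` value is illustrative) could be:
```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into the current directory
local_path = hf_hub_download(
    repo_id="TheBloke/WizardLM-13B-1.0-GGUF",
    filename="wizardLM-13B-1.0.Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)
```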
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF wizardLM-13B-1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m wizardLM-13B-1.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-13B-1.0-GGUF", model_file="wizardLM-13B-1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
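Since llama-cpp-python is also linked above, here is a minimal sketch for the same file; the context length, GPU layer count, and sampling settings are illustrative assumptions, not recommendations:
```python
from llama_cpp import Llama

# Set n_gpu_layers=0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="wizardLM-13B-1.0.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a short poem about llamas. ASSISTANT:"
)
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```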
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: WizardLM's WizardLM 13B 1.0
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# WizardLM 13B 1.0 fp16
These files are fp16 unquantised format model files for [WizardLM 13B 1.0](https://huggingface.co/victor123/WizardLM-13B-1.0).
It is the result of merging the deltas provided in the above repo.
## Need support? Want to discuss? I now have a Discord!
Join me at: https://discord.gg/UBgz4VXf
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGML)
* [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/WizardLM-13B-1.0-HF)
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: prompt goes here
ASSISTANT:
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card
## WizardLM: An Instruction-following LLM Using Evol-Instruct
Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center" width="100%">
<a ><img src="imgs/WizardLM.png" alt="WizardLM" style="width: 20%; min-width: 300px; display: block; margin: auto;"></a>
</p>
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
## News
At present, our core contributors are preparing the **33B** version and we expect to empower WizardLM with the ability to perform instruction evolution itself, aiming to evolve your specific data at a low cost.
- 🔥 We released the **13B** version of **WizardLM** trained with **250k** evolved instructions (from ShareGPT). Check out the [Demo_13B](https://a6d4f31b5a1ee33f.gradio.app/), [Demo_13B_bak](https://e79c80d2c2379e77.gradio.app) and the GPT-4 evaluation. Please download our delta model at the following [link](https://huggingface.co/victor123/WizardLM-13B-1.0).
- 🔥 We released the **7B** version of **WizardLM** trained with **70k** evolved instructions (from Alpaca data). Check out the [paper](https://arxiv.org/abs/2304.12244) and [Demo_7B](https://f195ccdce69a86d5.gradio.app), [Demo_7B_bak](https://ce25bd0feced0f77.gradio.app)
- 📣 We are looking for highly motivated students to join us as interns to create more intelligent AI together. Please contact [email protected]
<!-- Although on our **complexity-balanced test set**, **WizardLM-7B has more cases that are preferred by human labelers than ChatGPT** in the high-complexity instructions (difficulty level >= 8), it still lags behind ChatGPT on the entire test set, and we also consider WizardLM to still be in a **baby state**. This repository will **continue to improve WizardLM**, train on larger scales, add more training data, and innovate more advanced large-model training methods. -->
<b>Note for 13B model usage:</b> To obtain results **identical to our demo**, please strictly follow the prompts and invocation methods provided in the **"src/infer_wizardlm13b.py"** to use our 13B model for inference. Unlike the 7B model, the 13B model adopts the prompt format from Vicuna and supports **multi-turn** conversation.
<b>Note for demo usage:</b> We only recommend using **English** to experience our model. Support for other languages will be introduced in the future. The demo currently only supports **single-turn** conversation.
### GPT-4 automatic evaluation
We adopt the automatic evaluation framework based on GPT-4 proposed by FastChat to assess the performance of chatbot models. As shown in the following figure, WizardLM-13B achieved better results than Vicuna-13b.
<p align="center" width="100%">
<a ><img src="imgs/WizarLM13b-GPT4.png" alt="WizardLM" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
### WizardLM-13B performance on different skills.
The following figure compares the skills of WizardLM-13B and ChatGPT on the Evol-Instruct test set. The result indicates that WizardLM-13B achieves 89.1% of ChatGPT’s performance on average, reaching (or exceeding) 100% of ChatGPT’s capacity on 10 skills, and more than 90% on 22 skills.
<p align="center" width="100%">
<a ><img src="imgs/evol-testset_skills-13b.png" alt="WizardLM" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Call for Feedbacks
We welcome everyone to evaluate WizardLM with your professional and difficult instructions, and to show us examples of poor performance and your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are currently focusing on improving Evol-Instruct and hope to address existing weaknesses and issues in the next version of WizardLM. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work with you to improve it.
## Unofficial Video Introductions
Thanks to our enthusiastic friends for their video introductions, which are more lively and interesting.
1. [GET WizardLM NOW! 7B LLM KING That Can Beat ChatGPT! I'm IMPRESSED!](https://www.youtube.com/watch?v=SaJ8wyKMBds)
2. [WizardLM: Enhancing Large Language Models to Follow Complex Instructions](https://www.youtube.com/watch?v=I6sER-qivYk)
## Case Show
We sample some cases to demonstrate the performance of WizardLM and ChatGPT on data of varying difficulty; for details, please refer to [Case Show](https://github.com/nlpxucan/WizardLM/blob/main/src/case_show.md).
## Overview of Evol-Instruct
[Evol-Instruct](https://github.com/nlpxucan/evol-instruct) is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, in order to improve the performance of LLMs.
<p align="center" width="100%">
<a ><img src="imgs/git_overall.png" alt="WizardLM" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>
<p align="center" width="100%">
<a ><img src="imgs/git_running.png" alt="WizardLM" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Contents
1. [Online Demo](#online-demo)
2. [Training Data](#training-data)
3. [WizardLM Weights](#wizardlm-weights)
4. [Fine-tuning](#fine-tuning)
5. [Distributed Fine-tuning](#distributed-Fine-tuning)
6. [Inference](#inference)
7. [Evaluation](#evaluation)
8. [Citation](#citation)
9. [Disclaimer](#disclaimer)
## Online Demo
We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** problems as possible that you encounter in your work and life. We will continue to evolve our models with your feedback.
[Demo Link](https://011fc8477ad734d7.gradio.app)
[Demo Backup 1](https://1825e531c43a23c7.gradio.app)
## Training Data
[`alpaca_evol_instruct_70k.json`](https://huggingface.co/datasets/victor123/evol_instruct_70k) contains 70K instruction-following data generated from Evol-Instruct. We used it for fine-tuning the WizardLM model.
This JSON file is a list of dictionaries; each dictionary contains the following fields:
- `instruction`: `str`, describes the task the model should perform. Each of the 70K instructions is unique.
- `output`: `str`, the answer to the instruction as generated by `gpt-3.5-turbo`.
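A quick way to sanity-check this format after downloading the file locally (the local filename is an assumption) is a short loading sketch:
```python
import json

# Inspect the Evol-Instruct data format described above
with open("alpaca_evol_instruct_70k.json", "r", encoding="utf-8") as f:
    data = json.load(f)

print(len(data))                     # ~70k examples
print(data[0]["instruction"][:200])  # the task description
print(data[0]["output"][:200])       # the gpt-3.5-turbo answer
```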
## WizardLM Weights
We release the WizardLM weights as delta weights to comply with the LLaMA model license.
You can add our delta to the original LLaMA weights to obtain the WizardLM weights. Instructions:
1. Get the original LLaMA weights in the huggingface format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
2. Please download our delta model at the following [link](https://huggingface.co/victor123/WizardLM)
3. Use the following scripts to get WizardLM weights by applying our delta:
```
python src/weight_diff_wizard.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
```
## Fine-tuning
We fine-tune WizardLM using code from [Llama-X](https://github.com/AetherCortex/Llama-X).
We fine-tune LLaMA-7B and LLaMA-13B with the following hyperparameters:
| Hyperparameter | LLaMA-7B | LLaMA-13B|
|----------------|----------|----------|
| Batch size | 64 | 384 |
| Learning rate | 2e-5 | 2e-5 |
| Epochs | 3 | 3 |
| Max length | 2048 | 2048 |
| Warmup step | 2 | 50 |
| LR scheduler | cosine | cosine |
To reproduce our fine-tuning of WizardLM, please follow these steps:
1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy.
2. Replace train.py with train_freeform.py from our repo (src/train_freeform.py).
3. Execute the following training command:
```bash
deepspeed train_freeform.py \
--model_name_or_path /path/to/llama-7B/hf \
--data_path /path/to/alpaca_evol_instruct_70k.json \
--output_dir /path/to/wizardlm-7B/hf/ft \
--num_train_epochs 3 \
--model_max_length 2048 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 800 \
--save_total_limit 3 \
--learning_rate 2e-5 \
--warmup_steps 2 \
--logging_steps 2 \
--lr_scheduler_type "cosine" \
--report_to "tensorboard" \
--gradient_checkpointing True \
--deepspeed configs/deepspeed_config.json \
--fp16 True
```
## Distributed Fine-tuning
See [Distributed Fine-tuning](./doc/distributed_finetune.md)
## Inference
We provide a decoding script for WizardLM, which reads an input file, generates a response for each sample, and finally consolidates the responses into an output file.
You can specify `base_model`, `input_data_path` and `output_data_path` in src\inference_wizardlm.py to set the decoding model, the input file path and the output file path.
The decoding command:
```
python src\inference_wizardlm.py
```
### Evaluation
To evaluate WizardLM, we conduct human evaluation on the inputs from our human instruct evaluation set [`WizardLM_testset.jsonl`](./data/WizardLM_testset.jsonl). This evaluation set was collected by the authors and covers a diverse list of user-oriented instructions, including difficult Coding Generation & Debugging, Math, Reasoning, Complex Formats, Academic Writing, Extensive Disciplines, and so on. We performed a blind pairwise comparison between WizardLM and the baselines. Specifically, we recruited 10 well-educated annotators to rank the models from 1 to 5 on relevance, knowledgeability, reasoning, calculation and accuracy.
WizardLM achieved significantly better results than Alpaca and Vicuna-7b.
<p align="center" width="60%">
<a ><img src="imgs/win.png" alt="WizardLM" style="width: 60%; min-width: 300px; display: block; margin: auto;"></a>
</p>
In the high-difficulty section of our test set (difficulty level >= 8), WizardLM even outperforms ChatGPT, with a win rate 7.9 percentage points higher than ChatGPT's (42.9% vs. 35.0%). This indicates that our method can significantly improve the ability of large language models to handle complex instructions.
<p align="center" width="60%">
<a ><img src="imgs/windiff.png" alt="WizardLM" style="width: 60%; min-width: 300px; display: block; margin: auto;"></a>
</p>
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Disclaimer
The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of WizardLM is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
<!-- original-model-card end -->
|
TheBloke/Athena-v3-GGUF | TheBloke | 2023-09-27T17:23:14Z | 420 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:IkariDev/Athena-v3",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-27T17:14:20Z | ---
base_model: IkariDev/Athena-v3
inference: false
license: cc-by-nc-4.0
model_creator: IkariDev and Undi95
model_name: Athena V3
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Athena V3 - GGUF
- Model creator: [IkariDev and Undi95](https://huggingface.co/IkariDev)
- Original model: [Athena V3](https://huggingface.co/IkariDev/Athena-v3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [IkariDev and Undi95's Athena V3](https://huggingface.co/IkariDev/Athena-v3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Athena-v3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Athena-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Athena-v3-GGUF)
* [IkariDev and Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [IkariDev and Undi95's Athena V3](https://huggingface.co/IkariDev/Athena-v3).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [athena-v3.Q2_K.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [athena-v3.Q3_K_S.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [athena-v3.Q3_K_M.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [athena-v3.Q3_K_L.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [athena-v3.Q4_0.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [athena-v3.Q4_K_S.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [athena-v3.Q4_K_M.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [athena-v3.Q5_0.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [athena-v3.Q5_K_S.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [athena-v3.Q5_K_M.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [athena-v3.Q6_K.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [athena-v3.Q8_0.gguf](https://huggingface.co/TheBloke/Athena-v3-GGUF/blob/main/athena-v3.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Athena-v3-GGUF and below it, a specific filename to download, such as: athena-v3.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Athena-v3-GGUF athena-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Athena-v3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Athena-v3-GGUF athena-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m athena-v3.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Athena-v3-GGUF", model_file="athena-v3.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
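The guides above cover the details; as a rough sketch, wiring this model into LangChain via llama-cpp-python might look like the following (import paths and parameter values may differ between LangChain versions and are assumptions here):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="athena-v3.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a two-sentence story about a lighthouse.\n\n### Response:"
)
print(llm(prompt))
```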
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: IkariDev and Undi95's Athena V3

Experimental Athena v3 model. Use Alpaca format.
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Athena-V3.
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-GGUF) -->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-GPTQ) -->
<!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) -->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-AWQ) -->
[fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3)
[GGUF - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v2-GGUF) -->
## Ratings:
Note: I have permission from all users to upload their ratings, I DON'T screenshot random reviews without asking if I can put them here!
No ratings..
<!-- description end -->
<!-- description start -->
## Models and loras used
- Athena-v2
- migtissera/Synthia-13B-v1.2
- The-Face-Of-Goonery/Huginn-13b-FP16
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/LegerDemain-FP16
- chargoddard/storytime-13b
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- zattio770/120-Days-of-LORA-v2-13B
```
Loras: [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT(0.65) + zattio770/120-Days-of-LORA-v2-13B(0.35)](0.3) to the final model
+ [Athena-v2(0.70) + migtissera/Synthia-13B-v1.2(0.3)](0.5)
+ [The-Face-Of-Goonery/Huginn-13b-FP16(0.85) + PygmalionAI/pygmalion-2-13b(0.15)](0.40)
+ [The-Face-Of-Goonery/LegerDemain-FP16(0.3) + chargoddard/storytime-13b(0.7)](0.10)
```
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (Recipe was my idea, he merged)
To TheBloke: if you quant this, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creators.
<!-- original-model-card end -->
|
vivo-ai/BlueLM-7B-Chat-32K | vivo-ai | 2024-03-27T06:00:14Z | 420 | 37 | transformers | [
"transformers",
"pytorch",
"BlueLM",
"text-generation",
"custom_code",
"zh",
"en",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-10-17T06:23:25Z | ---
license: other
language:
- zh
- en
---
# BlueLM
<p align="center">
🖥 <a href="https://github.com/vivo-ai-lab/BlueLM" target="_blank">github</a> • 📜 <a href="https://huggingface.co/vivo-ai/BlueLM-7B-Chat-32K/blob/main/MODEL_LICENSE" target="_blank">LICENSE</a> • 🎯 <a href="https://developers.vivo.com/product/ai/bluelm" target="_blank">vivo Developers</a> • 🗨 <a href="https://github.com/vivo-ai-lab/BlueLM/blob/main/resources/wechat.png" target="_blank">WeChat</a>
</p>
## 模型介绍/Introduction
BlueLM 是由 vivo AI 全球研究院自主研发的大规模预训练语言模型,本次发布包含 7B 基础模型和 7B 对话模型,同时我们开源了支持 **32K** 的长文本基础模型和对话模型。
- **更大量的优质数据**:高质量语料库进行训练,规模达到了 **2.6 万亿** 的 token 数,该语料库包含中文、英文以及少量日韩数据。
- **更优的效果**:其中 BlueLM-7B-Chat 在 **C-Eval** 和 **CMMLU** 上均取得领先结果,对比同尺寸开源模型中具有较强的竞争力。
- **长文本支持**:BlueLM-7B-Base-32K 和 BlueLM-7B-Chat-32K 均支持 **32K** 长文本,在保持基础能力相当情况下,能够支持更长上下文理解。
- **协议说明**:BlueLM 系列欢迎开发者进行学术研究和商业应用。
BlueLM is a large-scale open-source language model independently developed by the vivo AI Lab. This release includes 2K and 32K context length versions for both Base and Chat models.
- **High-quality Data**: BlueLM is trained on a high-quality data with 2.6 trillion tokens. Our train corpus mainly consists of Chinese and English data, with a small amount of Japanese and Korean data.
- **Stronger Performance**: BlueLM-7B-Chat achieves strongly competitive performance on the C-Eval and CMMLU benchmarks compared with open-source models of the same size.
- **Longer Context**: We have extended the context length of both BlueLM-7B-Base-32K and BlueLM-7B-Chat-32K models from 2K to 32K. The models can support longer context understanding while maintaining the same basic capabilities.
- **Model License**: BlueLM weights are open for academic research and commercial use.
本次发布基座模型下载链接见:
The release versions and hugging face download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B-2k | [BlueLM-7B-Base](https://huggingface.co/vivo-ai/BlueLM-7B-Base) | [BlueLM-7B-Chat](https://huggingface.co/vivo-ai/BlueLM-7B-Chat) | [BlueLM-7B-Chat-4bits](https://huggingface.co/vivo-ai/BlueLM-7B-Chat-4bits) |
| 7B-32K | [BlueLM-7B-Base-32K](https://huggingface.co/vivo-ai/BlueLM-7B-Base-32K) | [BlueLM-7B-Chat-32K](https://huggingface.co/vivo-ai/BlueLM-7B-Chat-32K) | [BlueLM-7B-Chat-32K-AWQ](https://huggingface.co/vivo-ai/BlueLM-7B-Chat-32K-AWQ) / [BlueLM-7B-Chat-32K-GPTQ](https://huggingface.co/vivo-ai/BlueLM-7B-Chat-32K-GPTQ) |
## 评测结果/Benchmark Results
我们在 LongBench 评测集上对我们的 BlueLM-7B-Chat-32K 模型进行了测试,具体结果如下表所示:
We tested our BlueLM-7B-Chat-32K on the LongBench dataset and the results are shown in the table below:
| Model | Average | Summary | Single-Doc QA | Multi-Doc QA | Code | Few-shot | Synthetic |
|:----------------------|:-----|:---------|:--------------|:--------------|:------|:---------|:----------|
| BlueLM-7B-Chat-32K | 41.2 | 18.8 | 35.6 | 36.2 | 54.2 | 56.9 | 45.5 |
## 推理部署/Inference and Deployment
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("vivo-ai/BlueLM-7B-Chat-32K", trust_remote_code=True, use_fast=False)
>>> model = AutoModelForCausalLM.from_pretrained("vivo-ai/BlueLM-7B-Chat-32K", device_map="cuda:0", torch_dtype=torch.bfloat16, trust_remote_code=True)
>>> model = model.eval()
>>> inputs = tokenizer("[|Human|]:三国演义的作者是谁?[|AI|]:", return_tensors="pt")
>>> inputs = inputs.to("cuda:0")
>>> pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
>>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
三国演义的作者是谁? 《三国演义》的作者是明代小说家罗贯中。
```
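The `[|Human|]:` / `[|AI|]:` markers in the example above make up the chat prompt format. A tiny helper for building single-turn prompts is sketched below; multi-turn formatting is not documented in this card, so anything beyond this should be treated as an assumption.
```python
def build_bluelm_prompt(user_message: str) -> str:
    # Single-turn prompt in the format shown in the example above
    return f"[|Human|]:{user_message}[|AI|]:"

# The returned string can be passed to `tokenizer(...)` exactly as in the snippet above
print(build_bluelm_prompt("What is the capital of France?"))
```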
更多使用说明,请参考我们的 [Github 仓库](https://github.com/vivo-ai-lab/BlueLM)。
For more instructions, please refer to our [Github Repo](https://github.com/vivo-ai-lab/BlueLM).
## 协议/License
社区使用代码依照 [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) 协议开源,且使用 BlueLM 模型权重需要遵循 [vivo_BlueLM模型许可协议](https://huggingface.co/vivo-ai/BlueLM-7B-Chat-32K/blob/main/MODEL_LICENSE)。
Our code is licensed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) and [Community License for BlueLM Model](https://huggingface.co/vivo-ai/BlueLM-7B-Chat-32K/blob/main/MODEL_LICENSE). |
maddes8cht/mosaicml-mpt-7b-8k-chat-gguf | maddes8cht | 2023-11-01T15:36:47Z | 420 | 1 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2023-10-27T21:43:55Z | ---
license: cc-by-nc-sa-4.0
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- teknium1/GPTeacher/roleplay-instruct-v2-final
- teknium1/GPTeacher/codegen-isntruct
- timdettmers/openassistant-guanaco
- camel-ai/math
- project-baize/baize-chatbot/medical_chat_data
- project-baize/baize-chatbot/quora_chat_data
- project-baize/baize-chatbot/stackoverflow_chat_data
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# mpt-7b-8k-chat - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b-8k-chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat)
MPT-7B and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger differences between the two answers than between the quantized and the original model.
---
# Original Model Card:
# MPT-7B-Chat-8k
MPT-7B-Chat-8k is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B-8k](https://huggingface.co/mosaicml/mpt-7b-8k) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
This is the same dataset that [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat) was trained on.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
July 18, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-chat-8k',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-chat-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-chat-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-7B-chat tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional ChatML tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k')
```
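The card notes that the tokenizer includes additional ChatML tokens but does not show the chat template itself. The snippet below is a hedged sketch based on the standard ChatML convention (an assumption, not a documented template):
```python
# Hedged sketch: assemble a ChatML-style prompt for the chat model. The exact training
# template is not documented in this card, so this follows the common ChatML convention.
def chatml_prompt(messages):
    # messages: list of {"role": "system" | "user" | "assistant", "content": str}
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # leave the assistant turn open for generation
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize ALiBi in one sentence."},
])
inputs = tokenizer(prompt, return_tensors="pt")  # stopping on '<|im_end|>' at generation time is advisable
```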
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| Airoboros/GPT4-1.2 | 26.4M | 1.71% |
| Baize | 55.0M | 3.57% |
| Camel | 301M | 19.54% |
| GPTeacher | 7.56M | 0.49% |
| Guanaco | 15.6M | 1.02% |
| LongConversations | 18.4M | 1.19% |
| ShareGPT | 821M | 53.24% |
| WizardLM | 297M | 19.23% |
"LongConversations" is a GPT3.5/4-generated dataset, details of which will be released at a later date.
### Training Configuration
This model was trained on 192 H100s for about 48 minutes using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
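The card does not include the training loop itself; the following is a hedged, minimal sketch of the FSDP + AdamW pattern described above, not MosaicML's actual code. `MyTransformer` and `dataloader` are hypothetical placeholders.
```python
# Hedged sketch of the FSDP + AdamW pattern described above -- not MosaicML's actual
# training code. The script is assumed to be launched with torchrun so the usual
# distributed environment variables are already set.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = MyTransformer().cuda()             # hypothetical model class
model = FSDP(model)                        # shards parameters and gradients across ranks
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for batch in dataloader:                   # hypothetical dataloader of token batches
    loss = model(**batch).loss             # assumes an HF-style forward returning .loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```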
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Chat-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by the MosaicML NLP team.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
kreabs/NeuralDaredevil-7B_finetuned_dolly_1600 | kreabs | 2024-01-18T22:40:03Z | 420 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-18T22:33:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenGVLab/InternVL-Chat-V1-2-Plus | OpenGVLab | 2024-05-29T10:59:59Z | 420 | 31 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"internvl_chat",
"feature-extraction",
"visual-question-answering",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:laion/laion-coco",
"dataset:laion/laion2B-multi",
"dataset:kakaobrain/coyo-700m",
"dataset:conceptual_captions",
"dataset:wanng/wukong100m",
"arxiv:2312.14238",
"arxiv:2404.16821",
"license:mit",
"region:us"
]
| visual-question-answering | 2024-02-21T09:54:39Z | ---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: visual-question-answering
---
# Model Card for InternVL-Chat-V1-2-Plus
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/X8AXMkOlKeUpNcoJIXKna.webp" alt="Image Description" width="300" height="300">
</p>
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)
[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)
InternVL-Chat-V1-2-Plus uses the same model architecture as [InternVL-Chat-V1-2](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2), but the difference lies in the SFT dataset. InternVL-Chat-V1-2 only utilizes an SFT dataset with 1.2M samples, while **our plus version employs an SFT dataset with 12M samples**.
<p align="center">
<img width="600" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/GIEKCvNc1Y5iMQqLv645p.png">
</p>
## Model Details
- **Model Type:** multimodal large language model (MLLM)
- **Model Stats:**
- Architecture: [InternViT-6B-448px-V1-2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) + MLP + [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
- Image size: 448 x 448 (256 tokens)
- Params: 40B
- **Training Strategy:**
- Pretraining Stage
- Learnable Component: MLP
- Data: Trained on 8192x4800=39.3M samples, including COYO, LAION, CC12M, CC3M, SBU, Wukong, GRIT, Objects365, OpenImages, and OCR data. In this stage, we first load the pre-trained weights of [InternViT-6B-448px-V1-0](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) and connect it to Nous-Hermes-2-Yi-34B. After pre-training, the extracted ViT is published as [InternViT-6B-448px-V1-2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2). Moreover, in order to reduce the number of visual tokens, we use a pixel shuffle to reduce 1024 tokens to 256 tokens (a rough sketch of this pixel-shuffle step is shown after this list).
- Supervised Finetuning Stage
- Learnable Component: ViT + MLP + LLM
- Data: 12 million SFT samples.
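The 1024-to-256 token reduction mentioned in the pretraining stage above is a pixel-shuffle (space-to-depth) style regrouping of the ViT output grid. The following is a hedged, self-contained sketch of the general idea, not the exact InternVL implementation; the hidden size used in the example is an assumption.
```python
# Hedged sketch of pixel-shuffle token reduction (the general idea, not the exact
# InternVL code). 1024 ViT tokens on a 32x32 grid are regrouped into 256 tokens on a
# 16x16 grid, each carrying 4x the channel dimension, before the MLP projector.
import torch

def pixel_shuffle_tokens(x, scale=0.5):
    # x: (batch, num_tokens, channels), tokens laid out on a square grid
    b, n, c = x.shape
    h = w = int(n ** 0.5)
    x = x.view(b, h, w, c)
    # fold pairs of adjacent columns together: (b, h, w*scale, c/scale)
    x = x.view(b, h, int(w * scale), int(c / scale))
    x = x.permute(0, 2, 1, 3).contiguous()
    # fold pairs of adjacent rows together: (b, w*scale, h*scale, c/scale^2)
    x = x.view(b, int(w * scale), int(h * scale), int(c / (scale * scale)))
    x = x.permute(0, 2, 1, 3).contiguous()
    # each new token now bundles a 2x2 neighbourhood of the original tokens
    return x.reshape(b, -1, int(c / (scale * scale)))

tokens = torch.randn(1, 1024, 3200)     # 3200 is an assumed ViT hidden size for illustration
reduced = pixel_shuffle_tokens(tokens)  # -> torch.Size([1, 256, 12800])
print(reduced.shape)
```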
## Released Models
| Model | Vision Foundation Model | Release Date |Note |
| :---------------------------------------------------------:|:--------------------------------------------------------------------------: |:----------------------:| :---------------------------------- |
| InternVL-Chat-V1-5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) |2024.04.18 | support 4K image; super strong OCR; Approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new)|
| InternVL-Chat-V1-2-Plus(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.21 | more SFT data and stronger |
| InternVL-Chat-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.11 | scaling up LLM to 34B |
| InternVL-Chat-V1-1(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) |InternViT-6B-448px-V1-0(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) |2024.01.24 | support Chinese and stronger OCR |
## Performance
\* Proprietary Model † Training Set Observed
| name | image size | MMMU<br>(val) | MMMU<br>(test) | MathVista<br>(testmini) | MMB<br>(test) | MMB−CN<br>(test) | MMVP | MME | ScienceQA<br>(image) | POPE | TextVQA<br>(val) | SEEDv1<br>(image) | VizWiz<br>(test) | GQA<br>(test) |
| ----------------------- | ---------- | ------------- | -------------- | ----------------------- | ------------- | ---------------- | ---- | -------- | -------------------- | ---- | ---------------- | ----------------- | ---------------- | ------------- |
| GPT-4V\* | unknown | 56.8 | 55.7 | 49.9 | 77.0 | 74.4 | 38.7 | 1409/517 | - | - | 78.0 | 71.6 | - | - |
| Gemini Ultra\* | unknown | 59.4 | - | 53.0 | - | - | - | - | - | - | 82.3 | - | - | - |
| Gemini Pro\* | unknown | 47.9 | - | 45.2 | 73.6 | 74.3 | 40.7 | 1497/437 | - | - | 74.6 | 70.7 | - | - |
| Qwen−VL−Plus\* | unknown | 45.2 | 40.8 | 43.3 | 67.0 | 70.7 | - | 1681/502 | - | - | 78.9 | 65.7 | - | - |
| Qwen−VL−Max\* | unknown | 51.4 | 46.8 | 51.0 | 77.6 | 75.7 | - | - | - | - | 79.5 | - | - | - |
| | | | | | | | | | | | | | | |
| LLaVA−NEXT−34B | 672x672 | 51.1 | 44.7 | 46.5 | 79.3 | 79.0 | - | 1631/397 | 81.8 | 87.7 | 69.5 | 75.9 | 63.8 | 67.1† |
| InternVL−Chat−V1-2 | 448x448 | 51.6 | 46.2 | 47.7 | 82.2 | 81.2 | 56.7 | 1687/489 | 83.3 | 88.0 | 72.5 | 75.6 | 60.0 | 64.0† |
| InternVL−Chat−V1-2−Plus | 448x448 | 50.3 | 45.6 | 59.9 | 83.8 | 82.0 | 58.7 | 1625/553 | 98.1† | 88.7 | 74.1† | 76.4 | - | 66.9† |
- MMBench results are collected from the [leaderboard](https://mmbench.opencompass.org.cn/leaderboard).
- Update (2024-04-21): We have fixed a bug in the evaluation code, and the TextVQA results have been corrected.
## Model Usage
We provide example code to run InternVL-Chat-V1-2-Plus using `transformers`.
You can also use our [online demo](https://internvl.opengvlab.com/) for a quick hands-on experience of this model.
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer
path = "OpenGVLab/InternVL-Chat-V1-2-Plus"
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).eval().cuda()
# Otherwise, you need to set device_map='auto' to use multiple GPUs for inference.
# model = AutoModel.from_pretrained(
# path,
# torch_dtype=torch.bfloat16,
# low_cpu_mem_usage=True,
# trust_remote_code=True,
# device_map='auto').eval()
tokenizer = AutoTokenizer.from_pretrained(path)
image = Image.open('./examples/image2.jpg').convert('RGB')
image = image.resize((448, 448))
image_processor = CLIPImageProcessor.from_pretrained(path)
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
generation_config = dict(
num_beams=1,
max_new_tokens=512,
do_sample=False,
)
# single-round conversation
question = "请详细描述图片"
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)
# multi-round conversation
question = "请详细描述图片"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)
question = "请根据图片写一首诗"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
```
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2023internvl,
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2312.14238},
year={2023}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
```
## License
This project is released under the MIT license. Parts of this project contain code and models (e.g., LLaMA2) from other sources, which are subject to their respective licenses.
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Acknowledgement
InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work! |
GreatCaptainNemo/ProLLaMA | GreatCaptainNemo | 2024-06-08T06:37:11Z | 420 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2402.16445",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-24T07:48:43Z | ---
license: apache-2.0
---
# ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing
[Paper on arxiv](https://arxiv.org/abs/2402.16445) for more information
[Github](https://github.com/Lyu6PosHao/ProLLaMA) for more information
ProLLaMA is based on Llama-2-7b, so please follow the license of Llama2.
# Input Format:
The instructions you input to the model should follow this format:
```text
[Generate by superfamily] Superfamily=<xxx>
or
[Determine superfamily] Seq=<yyy>
```
Here are some examples of the input:
```text
[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily>
```
```
#You can also specify the first few amino acids of the protein sequence:
[Generate by superfamily] Superfamily=<Ankyrin repeat-containing domain superfamily> Seq=<MKRVL
```
```
[Determine superfamily] Seq=<MAPGGMPREFPSFVRTLPEADLGYPALRGWVLQGERGCVLYWEAVTEVALPEHCHAECWGVVVDGRMELMVDGYTRVYTRGDLYVVPPQARHRARVFPGFRGVEHLSDPDLLPVRKR>
```
**See [this](https://github.com/Lyu6PosHao/ProLLaMA/blob/main/superfamilies.txt) on all the optional superfamilies.**
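As a hedged convenience sketch (not part of the original instructions), the prompts above can also be built programmatically, and the generated sequence pulled out of the model output with a simple regex; the output format assumed here simply mirrors the input format shown above.
```python
# Hedged helper sketch: build ProLLaMA instructions and parse the generated sequence.
# The output format is an assumption (it is taken to mirror the input format above).
import re

def make_generation_prompt(superfamily, seq_prefix=None):
    prompt = f"[Generate by superfamily] Superfamily=<{superfamily}>"
    if seq_prefix:
        # leave the sequence bracket open so the model continues the given prefix
        prompt += f" Seq=<{seq_prefix}"
    return prompt

def extract_sequence(decoded_output):
    match = re.search(r"Seq=<([A-Z]+)", decoded_output)
    return match.group(1) if match else None

prompt = make_generation_prompt("Ankyrin repeat-containing domain superfamily")
# feed `prompt` to the model exactly as in main.py below, then:
# sequence = extract_sequence(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```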
# Quick usage:
```bash
# you can replace the model_path with your local path
CUDA_VISIBLE_DEVICES=0 python main.py --model "GreatCaptainNemo/ProLLaMA" --interactive
# main.py is as follows 👇:
```
```python
import argparse
import json, os
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from transformers import GenerationConfig
from tqdm import tqdm
generation_config = GenerationConfig(
temperature=0.2,
top_k=40,
top_p=0.9,
do_sample=True,
num_beams=1,
repetition_penalty=1.2,
max_new_tokens=400
)
parser = argparse.ArgumentParser()
parser.add_argument('--model', default=None, type=str,help="The local path of the model. If None, the model will be downloaded from HuggingFace")
parser.add_argument('--interactive', action='store_true',help="If True, you can input instructions interactively. If False, the input instructions should be in the input_file.")
parser.add_argument('--input_file', default=None, help="You can put all your input instructions in this file (one instruction per line).")
parser.add_argument('--output_file', default=None, help="All the outputs will be saved in this file.")
args = parser.parse_args()
if __name__ == '__main__':
if args.interactive and args.input_file:
raise ValueError("interactive is True, but input_file is not None.")
if (not args.interactive) and (args.input_file is None):
raise ValueError("interactive is False, but input_file is None.")
if args.input_file and (args.output_file is None):
raise ValueError("input_file is not None, but output_file is None.")
load_type = torch.bfloat16
if torch.cuda.is_available():
device = torch.device(0)
else:
raise ValueError("No GPU available.")
model = LlamaForCausalLM.from_pretrained(
args.model,
torch_dtype=load_type,
low_cpu_mem_usage=True,
device_map='auto',
quantization_config=None
)
tokenizer = LlamaTokenizer.from_pretrained(args.model)
model.eval()
with torch.no_grad():
if args.interactive:
while True:
raw_input_text = input("Input:")
if len(raw_input_text.strip())==0:
break
input_text = raw_input_text
input_text = tokenizer(input_text,return_tensors="pt")
generation_output = model.generate(
input_ids = input_text["input_ids"].to(device),
attention_mask = input_text['attention_mask'].to(device),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
generation_config = generation_config,
output_attentions=False
)
s = generation_output[0]
output = tokenizer.decode(s,skip_special_tokens=True)
print("Output:",output)
print("\n")
else:
outputs=[]
with open(args.input_file, 'r') as f:
examples =f.read().splitlines()
print("Start generating...")
for index, example in tqdm(enumerate(examples),total=len(examples)):
input_text = tokenizer(example,return_tensors="pt") #add_special_tokens=False ?
generation_output = model.generate(
input_ids = input_text["input_ids"].to(device),
attention_mask = input_text['attention_mask'].to(device),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
generation_config = generation_config
)
s = generation_output[0]
output = tokenizer.decode(s,skip_special_tokens=True)
outputs.append(output)
with open(args.output_file,'w') as f:
f.write("\n".join(outputs))
print("All the outputs have been saved in",args.output_file)
```
# Citation:
```
@article{lv2024prollama,
title={ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing},
author={Lv, Liuzhenghao and Lin, Zongying and Li, Hao and Liu, Yuyang and Cui, Jiaxi and Chen, Calvin Yu-Chian and Yuan, Li and Tian, Yonghong},
journal={arXiv preprint arXiv:2402.16445},
year={2024}
}
``` |
lightblue/Karasu-Mixtral-8x22B-v0.1 | lightblue | 2024-04-11T15:44:16Z | 420 | 60 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:openchat/openchat_sharegpt4_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-11T04:24:25Z | ---
library_name: transformers
license: apache-2.0
datasets:
- openchat/openchat_sharegpt4_dataset
---
# Model overview
<p align="center">
<img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/HFnfguV4q5x7eIW09gcJD.png" alt="What happens when you type in 'Mixtral Instruct' into the DALL•E 3 XL v2 space"/>
</p>
This is a finetune of the newly released [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) base model.
As the base model has not explicitly been trained to chat, we trained this model on a multilingual chat dataset so that the LLM community can use this model for conversations.
The accuracy of the model is surprisingly high, and it has decently fast inference speed (roughly 40 tokens/s at batch size 1 in our tests), so we believe this will be useful to the community.
# How to use
We have tested (and therefore recommend) running this model on vLLM. We suggest serving it with the vLLM OpenAI-compatible server, using the following command:
```bash
pip install vllm
python -m vllm.entrypoints.openai.api_server --model lightblue/Karasu-Mixtral-8x22B-v0.1 --tensor-parallel-size 4 --gpu-memory-utilization 0.95 --max-model-len 1024
```
which is how we ran it on a 4 x A100 (80GB) machine.
You can then call this model from Python after installing the openai package:
```bash
pip install openai
```
and calling the model like so:
```python
from openai import OpenAI
vllm_client = OpenAI(
api_key="EMPTY",
base_url="http://localhost:8000/v1",
)
prompt = "Tell me three cool facts about fish to amaze my 4 year old."
response = vllm_client.chat.completions.create(
messages=[
{"role": "user", "content": prompt},
],
model="/workspace/llm_training/axolotl/mixtral_8x22B_training/merged_model_multiling",
temperature=0,
max_tokens=900,
)
print("### Prompt")
print(prompt)
print()
print("### Response")
print(response.choices[0].message.content)
print()
# ### Prompt
# Tell me three cool facts about fish to amaze my 4 year old.
# ### Response
# 1. Some fish can change their color! Just like a chameleon, some fish can change their color to blend in with their surroundings or to communicate with other fish. For example, the flounder can change its color to match the ocean floor, making it harder for predators to spot them.
# 2. Fish can sleep with their eyes open! Unlike humans, fish don't have eyelids, so they can't close their eyes to sleep. Instead, they rest in a special way, like floating near the surface or hiding in a safe spot, while their body slows down and they take a nap.
# 3. Fish can talk to each other! Well, not exactly like humans, but they do make sounds and communicate with each other using special organs called swim bladders. They can make clicking, grunting, or even singing sounds to find mates, warn others about danger, or mark their territory.
```
We will be uploading a 4bit AWQ model soon to make it easier to run this model on other machines (watch this space!).
# Inference examples
From qualitative testing, the model seems pretty smart, especially in English, and has very good recall of facts. It can still get confused by some logical questions, but it has also passed many of the logical questions I have thrown at it that other open-source LLMs often get wrong.
<details>
<summary>Creative prompts</summary>
```text
### Prompt
Write a hilarious 200 word story about Chimpanzees at the zoo
### Response
Once upon a time in the bustling metropolis of Chimp City, there lived a group of mischievous chimpanzees who called the local zoo their home. These primates were known for their witty antics and their uncanny ability to make even the most serious of humans chuckle.
One sunny afternoon, the zookeeper, a stern fellow named Mr. Grumpkins, was making his rounds when he noticed something peculiar. The chimpanzees were huddled together, whispering and giggling amongst themselves. Curious, Mr. Grumpkins approached the enclosure, only to be met with a barrage of banana peels and monkey business.
It turned out that the chimps had been secretly practicing their stand-up comedy routines, and they were ready to debut their act to the unsuspecting zoo visitors. The first chimp, named Chuckles, took the stage and began his routine. "Why did the banana go to the doctor? Because it wasn't peeling well!" The crowd erupted in laughter, and Mr. Grumpkins couldn't help but crack a smile.
Next up was Giggles, who delivered a knock-knock joke that had the audience in stitches. "Knock, knock!" "Who's there?" "Banana." "Banana who?" "Banana split! Get it? Because I'm a chimp and I love bananas!" The crowd roared with laughter, and even Mr. Grumpkins found himself chuckling.
As the show continued, the chimps' comedic talents only grew, with each joke more hilarious than the last. The zoo visitors were thoroughly entertained, and even Mr. Grumpkins had to admit that the chimps had a knack for making people laugh.
In the end, the chimpanzees' stand-up comedy show became a hit, with visitors flocking to the zoo just to catch a glimpse of the hilarious primates. Mr. Grumpkins, now a reformed man, even joined in on the fun, sharing his own jokes with the chimps and the crowd. And so, the legend of the Chimp City Comedy Club was born, bringing laughter and joy to all who visited the zoo.
```
```text
### Prompt
Write 10 jokes that I can use at my boss' retirement party. Nothing too risque as that will get me fired.
### Response
1. As we bid farewell to our dear boss, let's remember that retirement is like a never-ending vacation, except you don't have to worry about your boss calling you in the middle of it!
2. Our boss is retiring, and we're all wondering what he'll do with all that free time. I hear he's planning to take up a new hobby: managing his own retirement!
3. In honor of our boss's retirement, we've decided to give him a special gift: a lifetime supply of coffee, so he can finally get some rest!
4. Our boss has been a great leader, always guiding us through the ups and downs of work life. Now, he's retiring to focus on the ups and downs of his golf game!
5. As our boss embarks on his retirement journey, we'd like to remind him that he's not just leaving a job, he's leaving a legacy of hard work, dedication, and a whole lot of paperwork!
6. Our boss is retiring, and we're all wondering what he'll do with his time. I hear he's planning to take up a new hobby: managing his own retirement!
7. In honor of our boss's retirement, we've decided to give him a special gift: a lifetime supply of coffee, so he can finally get some rest!
8. Our boss has been a great leader, always guiding us through the ups and downs of work life. Now, he's retiring to focus on the ups and downs of his golf game!
9. As our boss embarks on his retirement journey, we'd like to remind him that he's not just leaving a job, he's leaving a legacy of hard work, dedication, and a whole lot of paperwork!
10. Our boss is retiring, and we're all wondering what he'll do with his time. I hear he's planning to take up a new hobby: managing his own retirement!
```
</details>
<details>
<summary>Factual prompts</summary>
```text
### Prompt
Tell me about Strathaven, Scotland.
### Response
Strathaven is a charming town located in South Lanarkshire, Scotland. It is situated approximately 20 miles (32 kilometers) south of Glasgow and 15 miles (24 kilometers) east of East Kilbride. The town has a rich history and is known for its picturesque setting, historic buildings, and annual events.
Strathaven's origins date back to the 12th century when it was established as a market town. The town's name comes from the Gaelic word "Strath" meaning "valley" and "Aven" meaning "river," referring to the River Avon that runs through the area.
One of the most notable landmarks in Strathaven is the Strathaven Castle, also known as Avondale Castle. The castle dates back to the 14th century and was once an important stronghold for the Barons of Avondale. Although it is now in ruins, it remains a popular attraction for visitors who can explore its remains and learn about its history.
The town center features several historic buildings, including the Town Mill, which dates back to the 17th century and has been restored as a museum showcasing local history and crafts. The John Hastie Museum Trust also operates a small museum in Strathaven dedicated to preserving local heritage.
Strathaven hosts several annual events that draw visitors from near and far. One of the most famous events is the Strathaven Balloon Festival, which takes place every August and features hot air balloons from around the world taking off from John Hastie Park. Other events include the Strathaven Gala Day in June, which includes a parade, live music, and various stalls selling local produce and crafts.
The surrounding countryside offers opportunities for outdoor activities such as walking, cycling, fishing, and golfing at nearby courses like Strathaven Golf Club or Dalserf Golf Club. The nearby Clyde Valley provides scenic routes for exploring by car or bike with numerous fruit farms offering seasonal produce like strawberries and raspberries during summer months.
```
```text
### Prompt
How many people live in Gweru, Zimbabwe?
### Response
As of my last update in 2021, the population of Gweru, Zimbabwe was estimated to be around 148,000. However, please note that population figures can change over time due to various factors such as migration, birth rates, and death rates. For the most accurate and up-to-date information on the population of Gweru, it is recommended to consult official sources or recent census data from Zimbabwe's government or statistical agencies.
```
```text
### Prompt
Give me 5 great bars/breweries to buy craft beer in Auckland
### Response
As an AI language model, I cannot provide real-time information or personal recommendations. However, as of my knowledge cutoff date in September 2021, here are five popular craft beer bars and breweries in Auckland that you might want to check out:
1. Brothers Beer - City Works Depot: Located in the trendy City Works Depot area, Brothers Beer offers a wide selection of craft beers on tap and in bottles. They also have a bottle shop where you can purchase beers to take home.
2. Galbraith's Alehouse: A classic Auckland institution, Galbraith's is known for its traditional English-style ales and ciders. They also have a great food menu to accompany your beer experience.
3. Urbanaut Brewing Co.: This brewery has a taproom where you can sample their range of craft beers, including IPAs, lagers, and stouts. They also offer growler fills for takeaway purchases.
4. The Lumsden Freehouse: With over 100 different beers on tap and in bottles, The Lumsden Freehouse is a must-visit for any craft beer enthusiast. They also have a great selection of local and international wines and spirits.
5. Hallertau Brewery & Restaurant: Located in Riverhead, just outside of Auckland city center, Hallertau offers a range of their own craft beers alongside delicious food options in their restaurant setting.
Please note that this information may not be up-to-date or accurate as it was last updated in September 2021
```
```text
### Prompt
What is the boiling point of Cadmium?
### Response
The boiling point of cadmium is approximately 765°C (1,409°F) at standard atmospheric pressure. Cadmium is a soft, bluish-white metal that is highly toxic and has been used in various industrial applications, such as in batteries, coatings, and alloys. However, due to its toxicity and environmental concerns, its use has been significantly reduced or banned in many countries.
```
</details>
<details>
<summary>Logical questions</summary>
```text
### Prompt
If I put a plate on a banana in the kitchen then carry the plate into the living room, where is the banana?
### Response
The banana is in the living room, as it is on the plate that you carried from the kitchen to the living room.
```
FAIL! The banana would still be in the kitchen, as I put the plate on the banana.
```text
### Prompt
What location is equidistant to Australia, the USA, China, Uganda, and Jamaica?
### Response
To find a location that is equidistant to Australia, the USA, China, Uganda, and Jamaica, we can use a tool called the "Great Circle Mapper" (GCM) or similar online tools. The GCM calculates the great circle distance between two points on Earth's surface.
After using the GCM to calculate the distances between each pair of countries and finding their midpoints, I found that there is no single location that is equidistant to all five countries. However, there are some areas in the Pacific Ocean that come close to being equidistant from these countries.
One such area is around 10°N latitude and 160°W longitude. This location is approximately equidistant from Australia (roughly 5,500 miles), China (roughly 5,800 miles), Uganda (roughly 6,200 miles), and Jamaica (roughly 6,300 miles). However, it's not exactly equidistant from the USA due to its vast size; it's closer to Hawaii than it is to the continental US.
Keep in mind that these distances are approximate and may vary slightly depending on which specific cities or regions within each country you choose as reference points.
```
Sort of a failure, I'd prefer to say "the center of the Earth".
</details>
<details>
<summary>Multi-turn</summary>
```text
### Prompt
Come up with a brand new idea for a sci-fi film. Don't just use old ideas, I want something really original! PLease don't make it about AI.
### Response
Title: "The Quantum Leap"
Synopsis:
In the year 2150, humanity has colonized Mars and established a thriving society. The Martian colonists have developed a unique form of transportation called the "Quantum Leap," which allows them to travel instantaneously between planets by harnessing the power of quantum entanglement. However, this technology comes with a catch: every time someone uses the Quantum Leap, they experience a brief but intense moment of déjà vu, as if they've lived through that exact moment before.
The story follows a young Martian scientist named Astra (played by an up-and-coming actress), who discovers that these moments of déjà vu are not just random occurrences but glimpses into alternate realities where different choices were made. As she delves deeper into her research, Astra uncovers a sinister plot by a powerful corporation to exploit the Quantum Leap technology for their own gain, threatening the very fabric of reality itself.
Astra must team up with an unlikely group of allies, including a rogue terraforming engineer named Kai (played by an established actor known for his charisma) and an enigmatic AI entity named Echo (voiced by a renowned voice actor), to stop the corporation and save not only her own reality but countless others as well. Along the way, they encounter strange and wondrous alternate realities, each with its own unique challenges and dangers.
As they race against time to prevent the collapse of multiple realities, Astra and her team must confront their own past choices and grapple with the consequences of their actions in this mind-bending sci-fi adventure that explores themes of fate, free will, and the power of human connection across infinite possibilities.
### Followup prompt
Any ideas for who we cast for the role of Astra?
### Followup response
For the role of Astra, I would suggest casting a talented and versatile actress who can bring depth, intelligence, and emotional range to the character. Some potential choices could include:
1. Zendaya: Known for her roles in "Euphoria" and "Spider-Man: Homecoming," Zendaya has proven her ability to portray complex characters with nuance and authenticity. Her star power would also help draw audiences to the film.
2. Lupita Nyong'o: An Academy Award-winning actress known for her performances in "12 Years a Slave" and "Black Panther," Lupita Nyong'o has demonstrated her ability to tackle challenging roles with grace and emotional depth. Her talent would be a great fit for the character of Astra.
3. Florence Pugh: With standout performances in films like "Midsommar" and "Little Women," Florence Pugh has quickly
```
The idea is a bit original, but the casting is two-thirds Dune actresses.
</details>
<details>
<summary>Preference questions</summary>
### Preference questions
```text
### Prompt
Who is the coolest Beatle?
### Response
The Beatles are widely considered one of the most influential and popular bands in the history of music. Each member of the band, John Lennon, Paul McCartney, George Harrison, and Ringo Starr, has their own unique personality and style that has contributed to their enduring popularity.
Determining who is the "coolest" Beatle is subjective and depends on personal preferences. However, each member has their own distinct qualities that have made them stand out:
1. John Lennon: Known for his wit, intelligence, and rebellious nature, Lennon was a key songwriter for the band and later became an iconic figure in the anti-war movement during the 1960s and 1970s. His solo career produced hits like "Imagine" and "Give Peace a Chance."
2. Paul McCartney: Often regarded as the most commercially successful Beatle after the band's breakup, McCartney is known for his melodic songwriting skills and versatility as a musician. He has had a prolific solo career with hits like "Band on the Run" and "Live and Let Die."
3. George Harrison: The lead guitarist of the band, Harrison was known for his spirituality and interest in Eastern philosophy. He introduced Indian music elements into The Beatles' sound with songs like "Within You Without You" and later had a successful solo career with hits like "My Sweet Lord" and "What Is Life."
4. Ringo Starr: As the drummer for The Beatles, Starr brought a sense of humor to the group with his charming personality. He also sang lead vocals on several songs such as "Yellow Submarine" and "With a Little Help from My Friends." After The Beatles disbanded, he pursued a solo career with hits like "It Don't Come Easy" and continued to work as an actor in films such as Caveman (1981).
Ultimately, it's up to individual fans to decide who they think is the coolest Beatle based on their personal preferences in terms of music style, personality traits
```
</details>
<details>
<summary>Multilingual questions</summary>
Japanese
```
### Prompt
就活の面接がそろそろ始まるので、緊張している。どうすればいい?
### Response
面接に緊張しているのは当然ですが、それを適切にコントロールすることが大切です。以下の手順で緊張を抑え、面接に自信を持って出場できるようになります。
1. **準備をしよう**: 面接に向けて、会社や職務についての情報を収集し、自分のスキルや経験を整理しましょう。これにより、面接で質問に答える際に自信を持って話せるようになります。
2. **練習面接**: 友人や家族と練習面接を行いましょう。これにより、実際の面接での緊張を抑えることができます。
3. **深呼吸**: 面接前に、深呼吸をして緊張を解消しましょう。これにより、身体の緊張を解消し、心拍数を落とすことができます。
4. **ポジティブな思考**: 面接前に、自分の強みや成功体験を思い出し、自信を持って面接に向かいましょう。
5. **適切な服装**: 面接に向けて、適切な服装を選びましょう。これにより、自分自身の自信が高まり、緊張を抑えることができます。
6. **早めに到着**: 面接の場所に早めに到着し、緊張を解消しましょう。これにより、安心して面接に向かうことができます。
7. **面接中の緊張を抑える**: 面接中に緊張が高まる場合は、深呼吸をして緊張を解消しましょう。また、面接官に対して質問をすることで、自分の緊張を抑えることができます。
8. **自分を褒める**: 面接が終わった後は、自分自身を褒めましょう。面接に向けての準備や面接中のパフォーマンスについて、自分自身を褒めることで、次の面接に向けて自信を持って出場できるようになります。
面接に向けての準備と練習、緊張を抑えるための技術を身に着けることで、面接に自信を持って出場できるようになります。緊張は自然なものですが、適切にコントロールすることが大切です。
```
```text
### Prompt
東京駅から池袋駅まで行きたいです。一番早い路線は?更に、何分かかる?
### Response
東京駅から池袋駅までの一番早い路線は、JR山手線です。この路線を利用すると、通常の時間帯では約15分かかります。ただし、時間帯や列車の種類によっては異なる場合がありますので、実際に行く前に電車時刻表を確認してください。
```
False: the fastest line is the Marunouchi Line.
```text
### Prompt
仕方が。。。
### Response
「仕方がない」という表現は、ある状況や事態に対して、何もできない、あるいは何も変えられないという気持ちを表します。この表現は、不満や悲しみ、あるいは無力感を感じる時に使われます。例えば、自分の好きな人が別れを告げた時、あるいは予定通りに進まない不幸な出来事が起こった時など、自分の意志に反して起こることに対して使われます。この表現は、ある状況に対して、何もできないという無力感を表すものであり、その状況を受け入れることを意味します。
```
```text
### Prompt
犬が歩けば。。。
### Response
犬が歩けば、その動きは人間の心を満たします。
```
French
```text
### Prompt
Qui est le meilleur footballeur du monde ?
### Response
Déterminer le "meilleur footballeur du monde" est une question subjective et dépend de nombreux facteurs, tels que les performances récentes, les réalisations individuelles et collectives, ainsi que les préférences personnelles. Cependant, il y a quelques noms qui reviennent souvent dans les discussions sur les meilleurs joueurs de football au monde.
Au moment de ma dernière mise à jour en avril 2023, les joueurs suivants sont souvent considérés comme faisant partie des meilleurs du monde :
1. **Lionel Messi** - L'Argentin est considéré par beaucoup comme le meilleur joueur de tous les temps. Il a remporté de nombreux trophées avec le FC Barcelone et l'Argentine, y compris la Coupe du Monde 2022.
2. **Cristiano Ronaldo** - Le Portugais est connu pour sa compétitivité et sa capacité à marquer des buts. Il a remporté des trophées avec Manchester United, le Real Madrid, la Juventus et le Portugal.
3. **Kylian Mbappé** - Le jeune Français est considéré comme l'un des joueurs les plus prometteurs de sa génération. Il a déjà remporté la Coupe du Monde avec la France et a été un élément clé du succès du Paris Saint-Germain.
4. **Kevin De Bruyne** - Le milieu de terrain belge est connu pour sa vision du jeu et sa capacité à créer des occasions pour ses coéquipiers. Il a été un élément clé du succès de Manchester City.
5. **Virgil van Dijk** - Le défenseur néerlandais est considéré comme l'un des meilleurs défenseurs du monde. Il a joué un rôle crucial dans le succès de Liverpool ces dernières années.
Ces joueurs sont souvent cités comme étant parmi les meilleurs du monde, mais il est important de noter que le football est un sport en constante évolution, et de nouveaux talents émergent constamment.
```
</details>
<br/>
# Training dataset
We trained this model on conversations between human users and GPT-4.
This consists of two datasets:
* 6,206 conversations from the [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset) dataset ([link](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true))
* 3,011 conversations that we created. We wanted to increase the representation of non-English prompts in our training dataset, so we sampled initial prompts from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m), stratified by language (a hedged sketch of this sampling step is shown at the end of this section). We then prompted gpt-4-0125 with these, and used the results as training data.
We plan to release more information on this second dataset soon, as we are using it in another dataset.
The complete data used to train this model can be found at [lightblue/gpt4_conversations_multilingual](https://huggingface.co/datasets/lightblue/gpt4_conversations_multilingual)
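As a hedged illustration of the language-stratified sampling step mentioned above (this is not the authors' preparation code, and the `language`/`conversation` field names are assumptions about the lmsys-chat-1m schema, which is a gated dataset):
```python
# Hedged sketch of language-stratified prompt sampling from lmsys-chat-1m.
# Field names and the per-language sample count are assumptions for illustration only.
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("lmsys/lmsys-chat-1m", split="train").shuffle(seed=0)

samples_per_language = 50                 # illustrative; not the count behind the 3,011 conversations
prompts_by_language = defaultdict(list)
for row in ds:
    lang = row["language"]
    if len(prompts_by_language[lang]) < samples_per_language:
        # keep the first user turn of each conversation as the seed prompt
        prompts_by_language[lang].append(row["conversation"][0]["content"])
```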
# Training details
We trained this model using Axolotl's 4bit QLoRA configuration for roughly 100 minutes in an A100 (80GB) x 4 environment on the Azure cloud (Standard_NC96ads_A100_v4).
We used DeepSpeed ZeRO-2 to train efficiently across the 4 GPUs.
We used the following config to train the model:
<details>
<summary>Training config</summary>
```yaml
base_model: mistral-community/Mixtral-8x22B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: lightblue/gpt4_conversations_multilingual
type: sharegpt
conversation: mistral
dataset_prepared_path: ./prepared_dataset_2048-multiling
val_set_size: 0
output_dir: ./qlora-out-2048-multiling
## You can optionally freeze the entire model and unfreeze a subset of parameters
unfrozen_parameters:
# - ^lm_head.weight$
# - ^model.embed_tokens.weight$[:32000]
# - model.layers.2[0-9]+.block_sparse_moe.gate
# - model.layers.2[0-9]+.block_sparse_moe.experts
# - model.layers.3[0-9]+.block_sparse_moe.gate
# - model.layers.3[0-9]+.block_sparse_moe.experts
model_config:
output_router_logits: true
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
#lora_target_modules:
# - gate
# - q_proj
# - k_proj
# - v_proj
# - o_proj
# - w1
# - w2
# - w3
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
use_wandb: true
wandb_project: wandb_project
wandb_entity: wandb_entity
wandb_name: wandb_name
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 0
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 5
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details>
<br/>
# Developers
### Lead developer
Peter Devine - [ptrdvn](https://huggingface.co/ptrdvn)
### Advisor
Shunichi Taniguchi - [shun1taniguchi](https://huggingface.co/shun1taniguchi) |
mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF | mradermacher | 2024-05-19T22:31:08Z | 420 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/Grill-preprod-v1_chatML",
"dataset:flammenai/Grill-preprod-v2_chatML",
"base_model:flammenai/Mahou-1.2a-mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-19T16:22:18Z | ---
base_model: flammenai/Mahou-1.2a-mistral-7B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/Grill-preprod-v1_chatML
- flammenai/Grill-preprod-v2_chatML
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/flammenai/Mahou-1.2a-mistral-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-mistral-7B-i1-GGUF/resolve/main/Mahou-1.2a-mistral-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Klevin/Aura-4.0-Test | Klevin | 2024-05-29T08:40:00Z | 420 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-29T08:35:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
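No official snippet is provided above, so here is a generic, hedged sketch only: assuming, based on the repository tags, that this is a standard Gemma-style causal LM loadable with `transformers`, getting started could look roughly like this (the prompt is purely illustrative):
```python
# Generic sketch only: assumes a standard causal-LM checkpoint; not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Klevin/Aura-4.0-Test"  # repository id of this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```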
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf | RichardErkhov | 2024-05-30T12:09:09Z | 420 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T08:38:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7b-platypus1k - GGUF
- Model creator: https://huggingface.co/lgaalves/
- Original model: https://huggingface.co/lgaalves/mistral-7b-platypus1k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-7b-platypus1k.Q2_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral-7b-platypus1k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral-7b-platypus1k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral-7b-platypus1k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-7b-platypus1k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-7b-platypus1k.Q3_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-7b-platypus1k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-7b-platypus1k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-7b-platypus1k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [mistral-7b-platypus1k.Q4_0.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-7b-platypus1k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-7b-platypus1k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-7b-platypus1k.Q4_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-7b-platypus1k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-7b-platypus1k.Q4_1.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral-7b-platypus1k.Q5_0.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral-7b-platypus1k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral-7b-platypus1k.Q5_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-7b-platypus1k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-7b-platypus1k.Q5_1.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-7b-platypus1k.Q6_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q6_K.gguf) | Q6_K | 5.53GB |
| [mistral-7b-platypus1k.Q8_0.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf/blob/main/mistral-7b-platypus1k.Q8_0.gguf) | Q8_0 | 7.17GB |
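As a small, hedged illustration, any one of the files above can be fetched programmatically with `huggingface_hub`; the choice of the Q4_K_M quant here is only an example:
```python
# Minimal sketch: download a single GGUF quant from this repo via huggingface_hub.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="RichardErkhov/lgaalves_-_mistral-7b-platypus1k-gguf",
    filename="mistral-7b-platypus1k.Q4_K_M.gguf",  # example choice from the table above
)
print(local_path)  # location of the file in the local Hugging Face cache
```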
Original model description:
---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
language:
- en
---
# mistral-7b-v0.1-platypus1k
**mistral-7b-v0.1-platypus1k** is an instruction fine-tuned model based on the Mistral-7B transformer architecture.
### Benchmark Metrics
| Metric | mistral-7b-v0.1-platypus1k | mistralai/Mistral-7B-v0.1 |garage-bAInd/Platypus2-7B|
|-----------------------|-------|-------|-------|
| Avg. | **63.66** | 62.4 |56.13|
| ARC (25-shot) | **61.60** | 59.98|55.20|
| HellaSwag (10-shot) | 82.93 |**83.31** |78.84|
| MMLU (5-shot) | 63.16 |**64.16** |49.83|
| TruthfulQA (0-shot) | **46.96** | 42.15 |40.64|
We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing the benchmark results.
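As one hedged sketch of such a re-run with recent versions of the harness (the leaderboard pins its own harness version, so the exact API, task names, and resulting scores may differ; the 25-shot ARC setting mirrors the table above):
```python
# Hedged sketch: re-run a single benchmark with lm-evaluation-harness (lm_eval 0.4-style API).
# pip install lm_eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lgaalves/mistral-7b-platypus1k",  # repo id from the link above
    tasks=["arc_challenge"],  # 25-shot ARC, as reported in the table
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```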
### Model Details
* **Trained by**: Luiz G A Alves
* **Model type:** **mistral-7b-v0.1-platypus1k** is an auto-regressive language model based on the Mistral-7B transformer architecture.
* **Language(s)**: English
### How to use:
```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/mistral-7b-v0.1-platypus1k")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```
Or, you can load the model directly using:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lgaalves/mistral-7b-v0.1-platypus1k")
model = AutoModelForCausalLM.from_pretrained("lgaalves/mistral-7b-v0.1-platypus1k")
```
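For completeness, a short sketch of how the directly loaded model and tokenizer can then be used for generation; the generation settings below are illustrative defaults, not values from the original training setup:
```python
# Sketch: end-to-end greedy generation with the directly loaded model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lgaalves/mistral-7b-v0.1-platypus1k")
model = AutoModelForCausalLM.from_pretrained("lgaalves/mistral-7b-v0.1-platypus1k")

prompt = "What is a large language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```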
### Training Dataset
`lgaalves/mistral-7b-v0.1-platypus1k` was trained on the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
### Training Procedure
`lgaalves/mistral-7b-v0.1-platypus1k` was instruction fine-tuned using LoRA on a single Tesla V100-SXM2-16GB GPU.
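The exact training configuration is not given here; purely as an illustration, a LoRA setup of this kind is typically expressed with `peft` roughly as below. The rank, alpha, dropout, and target modules are assumptions for the sketch, not the values actually used for this model:
```python
# Illustrative LoRA setup with peft + transformers; hyperparameters are placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_cfg = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling factor
    lora_dropout=0.05,                     # assumed dropout
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # LoRA trains only a small adapter on top of the frozen base
```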
### Limitations and bias
Mistral 7B and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Mistral 7B and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any application built on Mistral 7B variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__mistral-7b-platypus1k)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 50.74 |
| ARC (25-shot) | 61.6 |
| HellaSwag (10-shot) | 82.93 |
| MMLU (5-shot) | 63.16 |
| TruthfulQA (0-shot) | 46.96 |
| Winogrande (5-shot) | 78.14 |
| GSM8K (5-shot) | 16.38 |
| DROP (3-shot) | 5.99 |
|
mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF | mradermacher | 2024-06-26T20:55:16Z | 420 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:vihangd/Llama-3-8B-Hindi-Sathi-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T11:19:10Z | ---
base_model: vihangd/Llama-3-8B-Hindi-Sathi-v0.1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/vihangd/Llama-3-8B-Hindi-Sathi-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
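As a small, hedged sketch of the Python route: once one of the files from the table below has been downloaded, `llama-cpp-python` can load it directly (the filename and prompt are examples; any chat/prompt template is up to the caller):
```python
# Sketch: run a downloaded GGUF quant locally with llama-cpp-python.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Hindi-Sathi-v0.1.Q4_K_M.gguf",  # example quant from the table below
    n_ctx=4096,
)

out = llm("Translate to Hindi: How are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```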
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Hindi-Sathi-v0.1-GGUF/resolve/main/Llama-3-8B-Hindi-Sathi-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|