modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf | RichardErkhov | 2024-06-16T07:33:41Z | 417 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-15T17:55:14Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Beyonder-4x7B-v3 - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/Beyonder-4x7B-v3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Beyonder-4x7B-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q2_K.gguf) | Q2_K | 8.24GB |
| [Beyonder-4x7B-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.IQ3_XS.gguf) | IQ3_XS | 9.21GB |
| [Beyonder-4x7B-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.IQ3_S.gguf) | IQ3_S | 9.73GB |
| [Beyonder-4x7B-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q3_K_S.gguf) | Q3_K_S | 9.72GB |
| [Beyonder-4x7B-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.IQ3_M.gguf) | IQ3_M | 9.92GB |
| [Beyonder-4x7B-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q3_K.gguf) | Q3_K | 10.79GB |
| [Beyonder-4x7B-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q3_K_M.gguf) | Q3_K_M | 10.79GB |
| [Beyonder-4x7B-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q3_K_L.gguf) | Q3_K_L | 11.68GB |
| [Beyonder-4x7B-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.IQ4_XS.gguf) | IQ4_XS | 12.15GB |
| [Beyonder-4x7B-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q4_0.gguf) | Q4_0 | 12.69GB |
| [Beyonder-4x7B-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.IQ4_NL.gguf) | IQ4_NL | 12.81GB |
| [Beyonder-4x7B-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q4_K_S.gguf) | Q4_K_S | 12.8GB |
| [Beyonder-4x7B-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q4_K.gguf) | Q4_K | 13.61GB |
| [Beyonder-4x7B-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q4_K_M.gguf) | Q4_K_M | 13.61GB |
| [Beyonder-4x7B-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q4_1.gguf) | Q4_1 | 14.09GB |
| [Beyonder-4x7B-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q5_0.gguf) | Q5_0 | 15.48GB |
| [Beyonder-4x7B-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q5_K_S.gguf) | Q5_K_S | 15.48GB |
| [Beyonder-4x7B-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q5_K.gguf) | Q5_K | 15.96GB |
| [Beyonder-4x7B-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q5_K_M.gguf) | Q5_K_M | 15.96GB |
| [Beyonder-4x7B-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q5_1.gguf) | Q5_1 | 16.88GB |
| [Beyonder-4x7B-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q6_K.gguf) | Q6_K | 18.46GB |
| [Beyonder-4x7B-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Beyonder-4x7B-v3-gguf/blob/main/Beyonder-4x7B-v3.Q8_0.gguf) | Q8_0 | 23.9GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/AlphaMonarch-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/NeuralDaredevil-7B
---

# 🔮 Beyonder-4x7B-v3
Beyonder-4x7B-v3 is an improvement over the popular [Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2). It's a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
Special thanks to [beowolx](https://huggingface.co/beowolx) for making the best Mistral-based code model and to [SanjiWatsuki](https://huggingface.co/SanjiWatsuki) for creating one of the very best RP models.
**Try the demo**: https://huggingface.co/spaces/mlabonne/Beyonder-4x7B-v3
## 🚀 Applications
This model uses a context window of 8k. I recommend using it with the Mistral Instruct chat template (works perfectly with LM Studio).
If you use SillyTavern, you might want to tweak the inference parameters. Here's what LM Studio uses as a reference: `temp` 0.8, `top_k` 40, `top_p` 0.95, `min_p` 0.05, `repeat_penalty` 1.1.
Thanks to its four experts, it's a well-rounded model capable of handling most tasks. Because two experts are always used to generate an answer, every task benefits from the other experts' capabilities, such as chat combined with role-play, or math combined with code.
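To make these settings concrete, here is a minimal, hedged sketch using llama-cpp-python with one of the GGUF files listed above; the local file path, the `chat_format` string, and the example prompt are illustrative assumptions rather than part of the original instructions.
```python
# Hedged sketch: load a GGUF quant of Beyonder-4x7B-v3 and apply the recommended sampling settings.
from llama_cpp import Llama

llm = Llama(
    model_path="Beyonder-4x7B-v3.Q4_K_M.gguf",  # assumed local path to one of the quants above
    n_ctx=8192,                                 # 8k context window, as noted above
    chat_format="mistral-instruct",             # Mistral Instruct chat template
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}],
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```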
## ⚡ Quantized models
Thanks [bartowski](https://huggingface.co/bartowski) for quantizing this model.
* **GGUF**: https://huggingface.co/mlabonne/Beyonder-4x7B-v3-GGUF
* **More GGUF**: https://huggingface.co/bartowski/Beyonder-4x7B-v3-GGUF
* **ExLlamaV2**: https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2
## 🏆 Evaluation
This model is not designed to excel in traditional benchmarks, as the code and role-playing models generally do not apply to those contexts. Nonetheless, it performs remarkably well thanks to strong general-purpose experts.
### Nous
Beyonder-4x7B-v3 is one of the best models on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)) and significantly outperforms the v2. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| [**mlabonne/Beyonder-4x7B-v3**](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) [📄](https://gist.github.com/mlabonne/3740020807e559f7057c32e85ce42d92) | **61.91** | **45.85** | **76.67** | **74.98** | **50.12** |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) [📄](https://gist.github.com/mlabonne/895ff5171e998abfdf2a41a4f9c84450) | 58.29 | 44.79 | 75.05 | 65.68 | 47.65 |
| [mlabonne/Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2) [📄](https://gist.github.com/mlabonne/f73baa140a510a676242f8a4496d05ca) | 57.13 | 45.29 | 75.95 | 60.86 | 46.4 |
| [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) [📄](https://gist.github.com/mlabonne/08b5280c221fbd7f98eb27561ae902a3) | 50.35 | 39.98 | 71.77 | 48.73 | 40.92 |
### EQ-Bench
Beyonder-4x7B-v3 is the best 4x7B model on the EQ-Bench leaderboard, outperforming older versions of ChatGPT and Llama-2-70b-chat. It is very close to Mixtral-8x7B-Instruct-v0.1 and Gemini Pro. Thanks [Sam Paech](https://huggingface.co/sam-paech) for running the eval.

### Open LLM Leaderboard
It's also a strong performer on the Open LLM Leaderboard, significantly outperforming the v2 model.

## 🧩 Configuration
```yaml
base_model: mlabonne/AlphaMonarch-7B
experts:
  - source_model: mlabonne/AlphaMonarch-7B
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "I want"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
```
## 🌳 Model Family Tree

## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Beyonder-4x7B-v3"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Output:
> A Mixture of Experts (MoE) is a neural network architecture that tackles complex tasks by dividing them into simpler subtasks, delegating each to specialized expert modules. These experts learn to independently handle specific problem aspects. The MoE structure combines their outputs, leveraging their expertise for improved overall performance. This approach promotes modularity, adaptability, and scalability, allowing for better generalization in various applications.
|
shoppal/flan-t5-large-product-title-rewrite | shoppal | 2024-06-17T07:29:23Z | 417 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-06-17T07:10:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
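Until the authors add their own snippet, a hedged sketch based only on this repository's tags (`transformers`, `t5`, `text2text-generation`) might look like the following; the prompt format shown is an unverified assumption.
```python
# Hedged sketch inferred from the repo tags (t5, text2text-generation); not provided by the model authors.
from transformers import pipeline

rewriter = pipeline(
    "text2text-generation",
    model="shoppal/flan-t5-large-product-title-rewrite",
)

# Illustrative input only; the expected input format is not documented in this card.
result = rewriter(
    "Rewrite this product title: wireless bluetooth headphones noise cancelling over ear 40h battery",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```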
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ali-C137/F1H10M-0000 | Ali-C137 | 2024-06-21T12:44:04Z | 417 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-17T17:54:59Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
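Until the authors add their own snippet, a hedged sketch based only on this repository's tags (`transformers`, `falcon`, `text-generation`, `custom_code`) might look like the following; the dtype, device placement, and prompt are illustrative assumptions.
```python
# Hedged sketch inferred from the repo tags (falcon, text-generation, custom_code); not provided by the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ali-C137/F1H10M-0000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: adjust to your hardware
    device_map="auto",            # requires accelerate
    trust_remote_code=True,       # the repo is tagged custom_code
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```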
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
faceradix/The-Trinity-Coder-7B-Q4_K_M-GGUF | faceradix | 2024-06-24T09:53:47Z | 417 | 1 | transformers | [
"transformers",
"gguf",
"Code Generation",
"Logical Reasoning",
"Problem Solving",
"Text Generation",
"AI Programming Assistant",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:S-miguel/The-Trinity-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-24T09:53:28Z | ---
base_model: S-miguel/The-Trinity-Coder-7B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- Code Generation
- Logical Reasoning
- Problem Solving
- Text Generation
- AI Programming Assistant
- llama-cpp
- gguf-my-repo
---
# faceradix/The-Trinity-Coder-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`S-miguel/The-Trinity-Coder-7B`](https://huggingface.co/S-miguel/The-Trinity-Coder-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/S-miguel/The-Trinity-Coder-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo faceradix/The-Trinity-Coder-7B-Q4_K_M-GGUF --hf-file the-trinity-coder-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo faceradix/The-Trinity-Coder-7B-Q4_K_M-GGUF --hf-file the-trinity-coder-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo faceradix/The-Trinity-Coder-7B-Q4_K_M-GGUF --hf-file the-trinity-coder-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo faceradix/The-Trinity-Coder-7B-Q4_K_M-GGUF --hf-file the-trinity-coder-7b-q4_k_m.gguf -c 2048
```
|
Helsinki-NLP/opus-mt-fr-ru | Helsinki-NLP | 2023-08-16T11:37:08Z | 416 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ru
* source languages: fr
* target languages: ru
* OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt)
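A short usage sketch with the standard Hugging Face `transformers` Marian classes (not part of the original OPUS-MT card; the example French sentence is illustrative):
```python
# Hedged usage sketch; the French example sentence is illustrative only.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Le chat dort sur le canapé."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```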
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ru | 37.9 | 0.585 |
|
rinna/japanese-gpt-neox-small | rinna | 2024-04-03T07:18:32Z | 416 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"japanese",
"gpt-neox",
"lm",
"nlp",
"dataset:cc100",
"dataset:Wikipedia",
"dataset:mc4",
"arxiv:2101.00190",
"arxiv:2404.01657",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-08-31T05:58:25Z | ---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- japanese
- gpt-neox
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- Wikipedia
- mc4
inference: false
---
# japanese-gpt-neox-small

This repository provides a small-sized Japanese GPT-NeoX model. The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
# Update log
* 2023/03/20 Updated the model weight and config files so that the model can be loaded via Hugging Face's official GPT-NeoX implementation.
# How to use the model
~~~~
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-small", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-small")
~~~~
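A short generation sketch continuing from the snippet above; the prompt, sampling settings, and token budget are illustrative assumptions.
~~~~
# Hedged continuation of the snippet above; prompt and sampling settings are illustrative.
import torch

prompt = "こんにちは、今日は"
input_ids = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False)
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~~~~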
# Model architecture
A 12-layer, 768-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz), [Japanese C4](https://huggingface.co/datasets/mc4), and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
# A toy prefix-tuning weight file
Along with the pretrained model, we also release a [prefix-tuning](https://arxiv.org/abs/2101.00190) weight file named `smileface_suffix.task0.weight` for demonstration. The toy prefix-tuning weights here are trained to encourage the model to end every generated sentence with a smiling face emoji 😃. Find the training/inference code for prefix-tuning at our GitHub repo [prefix-tuning-gpt](https://github.com/rinnakk/prefix-tuning-gpt).
Here are a few samples generated with and without the toy prefix weights, respectively.
3 samples without the prefix weights
> 1. ใใใฃใจใใใฏ็ตถๅฏพ้้ใฃใฆใชใใญใ ใใใใซใฏ5ใๅฝ่ชใซ4ใคใฎๅคๅฝ่ชใฎๆๅณใชใใฆใใใใชใใ ใงใใใจใใใใใใฎ็ฐกๅใช่ฑๆใใฉใใชๆๅณใๆใคใฎใ็ฅใใใใใญ!ใ
> 2. 25ๅ้ ใซๅ
ฌๅใซ็ใใฆใใใณใใซๅบงใฃใฆๅพ
ใฃใฆใใใจใใพใใใฆใSๅ
็ใใ้ฃ็ตกใๅ
ฅใใพใใใ ็ขบใใๅๅพใฎ็คผๆใฎๆใซ่ชๅใฎๆใฃใฆใใใๅผๅฝใ้ฃในใ่จๆถใ้ฎฎๆใซๆฎใฃใฆใใพใใ ๅพใงใคใณใฟใผใใใใงๆค็ดขใใใใSๅ
็ใฎใใญใฐใซ้ฃใณใพใใใ ไปๆฅใฎๆฉใใฏใใฏ็ผใใในใไฝใฃใฆใฟใพใใ! * ไธใฎๅ็ใฏๆจๆฅใฎๆ็ผใใงใใ
> 3. CTใงๆญฏๅฝขใใงใใฆใใใฎๅพใใใซใใฎๆญฏๅฝขใๅใณๅใใใใใซใชใใฎใฏใไฝใๅๅ ใ ใใ? ่ซๆญฏใซใชใฃใๅๅ ใใๅฃ่ญใใช? ใใใจใๆญฏๅจ็
ใใช? ๆญฏ็ณใใจใใใพใงใใใใใใกใใฃใจใใใใใใ ๅญไพใฎ่ซๆญฏใฃใฆใใชใใชใๆฒปใใชใใงใใใญใ่ฆชๅ
ๅผใงไฝๅบฆใใ ๅญไพใฎๆญฏๆ นใฏใ่ฆชใฎใใฎใซใชใใพใใ ใใใฆ่ชๅใฎใใฎใ ใฃใใใ็ฅใใชใ้ใซๆใใใใใ็ใใฆใใใใใใพใใ ๅคงไบบใซใชใฃใฆ่ฆชใใใฟใๅ ดๅใฏใ็ฝใๆญฏใซๅคใใฃใฆใใฆใ้ๅฑใฎใใใผใงใๆชใใชใใ่ฆชใใใฎใใๆญฏใฎๅฟ้
ใฏใชใใงใใใญใ
3 samples with the prefix weights:
> 1. โปๆตทๅคใใฉใณใๅใฎๅ ดๅใฏใ่ฟๅใป่ฟ้็ญใฏใๅใ่ดใใใญใพใใฎใงไบใใไบๆฟ้กใใพใใ โป ๅๅ็บ้ๅพใใๅฎขๆงใธๅๅ่ฟ้ๅฎไบใพใงใฎในใใผใใ้่ฆใใๆนใฏๆตทๅคใใฉใณใๅใๅ
ใซ้ใไปใใใใฆ้ ใ ใฑใผในใใใใใพใใ ๐
> 2. ็งใฏ้ๅปใซๆใฃใฆใใไธๅ็ฃใใไธญๅคไฝๅฎ
ใจใใฆๅฃฒๅดใใฆใใพใใใใใใฎๅพใฎ็งใฎ็ถๆณใฏใฉใใ ใฃใใฎใงใใใใ? ๐ ็ตๆใจใใฆใฏใๆ่ณ็ฉไปถใจใใฆๅฃฒๅดใ่ใใฆใใพใใใไปใพใงใฎ็ธๅ ดใ่ชญใใงใใใ ใใฐใใใใจๆใใพใใ ๐ ไปใพใงใ็ฉไปถใซๅฏพใใฆใฎๆ่ณใฏ้ๅธธใซๆงใใใซใใฆใใใฎใงใใใไปๅใฎๆๆกใ่ชญใใงใๅฎ้ใซ็ฉไปถใ่ณผๅ
ฅใใ้ใซใฏใใกใใจ็ขบ่ชใใใใใจๆใใพใใ ๐
> 3. ใใฎๅ็้ใฎ่กจ็ดใใใฎๅฐ็ดใซใใฆใใไฝๅฎถใใใฏใใพใใง่ชฐใใฎๆ็คบใๅใใฆ่กๅใใฆใใไบบ็ฉใฎใใใซ่ฆใใใใจใใใฎใใใใฎไฝๅใใใถใซใใใ ใๆฎบใๅฑ้ๅฃใใฎๆใใฆใใไฝๅใงใใใใใซๆ ใใพใใ ๐
# Inference with FasterTransformer
As of version 5.1, [NVIDIA FasterTransformer](https://github.com/NVIDIA/FasterTransformer) supports both inference for GPT-NeoX and a variety of soft prompts (including prefix-tuning). The released pretrained model and prefix weights in this repo have been verified to work with FasterTransformer 5.1.
# How to cite
~~~
@misc{rinna-japanese-gpt-neox-small,
    title = {rinna/japanese-gpt-neox-small},
    author = {Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/japanese-gpt-neox-small},
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    url = {https://arxiv.org/abs/2404.01657},
}
~~~
# License
[The MIT license](https://opensource.org/licenses/MIT)
|
timm/coat_lite_medium_384.in1k | timm | 2023-04-24T03:43:08Z | 416 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.06399",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-24T03:42:45Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coat_lite_medium_384.in1k
A CoaT (Co-Scale Conv-Attentional Transformer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.6
- GMACs: 28.7
- Activations (M): 116.7
- Image size: 384 x 384
- **Papers:**
- Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('coat_lite_medium_384.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'coat_lite_medium_384.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 512) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{Xu_2021_ICCV,
author = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
title = {Co-Scale Conv-Attentional Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {9981-9990}
}
```
|
nerijs/lego-brickheadz-xl | nerijs | 2023-08-14T05:20:10Z | 416 | 20 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2023-08-14T05:18:21Z | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lego brickheadz
widget:
- text: picture of a lego brickheadz of a corgi
---
# LEGO BrickHeadz XL
## Consider supporting further research on [Patreon](https://www.patreon.com/user?u=29466374) or [Twitter](https://twitter.com/nerijs)
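A hedged diffusers sketch based on this repository's metadata (SDXL base model and the `lego brickheadz` instance prompt); the step count and output filename are illustrative assumptions.
```python
# Hedged sketch: load SDXL base and apply this LoRA; settings are illustrative.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("nerijs/lego-brickheadz-xl")

image = pipe(
    prompt="picture of a lego brickheadz of a corgi",
    num_inference_steps=30,
).images[0]
image.save("lego_brickheadz_corgi.png")
```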
 |
TheBloke/Llama-2-13B-AWQ | TheBloke | 2023-11-09T18:21:13Z | 416 | 12 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"base_model:meta-llama/Llama-2-13b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-09-18T23:56:32Z | ---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 13B
base_model: meta-llama/Llama-2-13b-hf
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B - AWQ
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf)
<!-- description start -->
## Description
This repo contains AWQ model files for [Meta's Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by the continuous-batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13b-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-13B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-13B-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Llama-2-13B-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Llama-2-13B-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta's Llama 2 13B
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
TheBloke/agentlm-13B-GGUF | TheBloke | 2023-10-20T22:31:05Z | 416 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:THUDM/AgentInstruct",
"arxiv:2310.12823",
"base_model:THUDM/agentlm-13b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-20T22:24:43Z | ---
base_model: THUDM/agentlm-13b
datasets:
- THUDM/AgentInstruct
inference: false
license: llama2
model_creator: Knowledge Engineering Group (KEG)
model_name: AgentLM 13B
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# AgentLM 13B - GGUF
- Model creator: [Knowledge Engineering Group (KEG)](https://huggingface.co/THUDM)
- Original model: [AgentLM 13B](https://huggingface.co/THUDM/agentlm-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Knowledge Engineering Group (KEG)'s AgentLM 13B](https://huggingface.co/THUDM/agentlm-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/agentlm-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/agentlm-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/agentlm-13B-GGUF)
* [Knowledge Engineering Group (KEG)'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/THUDM/agentlm-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: THUDM-Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [agentlm-13b.Q2_K.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [agentlm-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [agentlm-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [agentlm-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [agentlm-13b.Q4_0.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [agentlm-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [agentlm-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [agentlm-13b.Q5_0.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [agentlm-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [agentlm-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [agentlm-13b.Q6_K.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [agentlm-13b.Q8_0.gguf](https://huggingface.co/TheBloke/agentlm-13B-GGUF/blob/main/agentlm-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/agentlm-13B-GGUF and below it, a specific filename to download, such as: agentlm-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/agentlm-13B-GGUF agentlm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/agentlm-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/agentlm-13B-GGUF agentlm-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m agentlm-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/agentlm-13B-GGUF", model_file="agentlm-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
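As a rough, untested sketch (not taken from either guide), wiring one of these GGUF files into LangChain via the `LlamaCpp` wrapper could look like the following; the file name, layer count, and sampling settings are illustrative assumptions, and the import path may differ between LangChain versions:

```python
from langchain.llms import LlamaCpp  # in newer versions: from langchain_community.llms import LlamaCpp

# Assumes agentlm-13b.Q4_K_M.gguf was downloaded as described above.
llm = LlamaCpp(
    model_path="./agentlm-13b.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
)

# Same Llama-2-chat style prompt format as shown earlier in this README.
prompt = (
    "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n<</SYS>>\n"
    "Explain in one sentence what an autonomous agent is. [/INST]"
)
print(llm(prompt))
```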
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Knowledge Engineering Group (KEG)'s AgentLM 13B
## AgentLM-13B
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/AgentInstruct" target="_blank">[Dataset] </a> • 💻 <a href="https://github.com/THUDM/AgentTuning" target="_blank">[Github Repo]</a> • <a href="https://THUDM.github.io/AgentTuning/" target="_blank">[Project Page]</a> • <a href="https://arxiv.org/abs/2310.12823" target="_blank">[Paper]</a>
</p>
**AgentTuning** represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning enables the agent capabilities of LLMs with robust generalization on unseen agent tasks while remaining good on general language abilities. We have open-sourced the AgentInstruct dataset and AgentLM.
## Models
**AgentLM** models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama-2-chat models.
The models follow the conversation format of [Llama-2-chat](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), with system prompt fixed as
```
You are a helpful, respectful and honest assistant.
```
7B, 13B, and 70B models are available on Huggingface model hub.
|Model|Huggingface Repo|
|---|---|
|AgentLM-7B| [🤗 Huggingface Repo](https://huggingface.co/THUDM/agentlm-7b) |
|AgentLM-13B| [🤗 Huggingface Repo](https://huggingface.co/THUDM/agentlm-13b) |
|AgentLM-70B| [🤗 Huggingface Repo](https://huggingface.co/THUDM/agentlm-70b) |
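As an illustrative sketch (not part of the original card), querying AgentLM-13B with Hugging Face transformers and the Llama-2-chat format above might look like this; the user question and generation settings are arbitrary placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/agentlm-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Llama-2-chat style prompt with the fixed system prompt used by AgentLM.
system = "You are a helpful, respectful and honest assistant."
user = "List the steps an agent would take to book a flight online."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```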
## Citation
If you find our work useful, please consider citing AgentTuning:
```
@misc{zeng2023agenttuning,
title={AgentTuning: Enabling Generalized Agent Abilities for LLMs},
author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
year={2023},
eprint={2310.12823},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
mradermacher/Marengoli_7B_SLERP-GGUF | mradermacher | 2024-05-06T05:59:38Z | 416 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"louisgrc/Rivoli_7B_SLERP",
"louisgrc/Marengo_7B_SLERP",
"en",
"base_model:louisgrc/Marengoli_7B_SLERP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-25T08:08:27Z | ---
base_model: louisgrc/Marengoli_7B_SLERP
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- louisgrc/Rivoli_7B_SLERP
- louisgrc/Marengo_7B_SLERP
---
## About
static quants of https://huggingface.co/louisgrc/Marengoli_7B_SLERP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
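As an illustrative, untested sketch: a single-file quant from the table below can be fetched with `huggingface_hub` and loaded with `llama-cpp-python`; the choice of Q4_K_M and the parameters are assumptions, not recommendations from this repo:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table below (Q4_K_M picked arbitrarily).
path = hf_hub_download(
    repo_id="mradermacher/Marengoli_7B_SLERP-GGUF",
    filename="Marengoli_7B_SLERP.Q4_K_M.gguf",
)

# n_gpu_layers > 0 offloads layers to the GPU, if one is available.
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=0)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```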
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Marengoli_7B_SLERP-GGUF/resolve/main/Marengoli_7B_SLERP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/dolphin-2.8-mistral-7b-v02-GGUF | QuantFactory | 2024-04-07T16:20:23Z | 416 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"conversational",
"text-generation-inference",
"text-generation",
"en",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-04-04T17:35:41Z | ---
language:
- en
license: apache-2.0
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- mistral
- conversational
- text-generation-inference
---
# Dolphin 2.8 Mistral 7b v0.2 - GGUF
- This is a GGUF-quantized version, created using llama.cpp
- Original model: [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
## Description
This model is based on Mistral-7b-v0.2.
The base model has 32k context, and the full-weight fine-tune was done with a 16k sequence length.
Dolphin-2.8 has a variety of instruction, conversational, and coding skills.
Dolphin is uncensored. The dataset was filtered by the creators to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. You are responsible for any content you create using this model.
Dolphin is licensed Apache 2.0. The creators grant permission for any use including commercial. Dolphin was trained on data generated from GPT4 among other models.
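As a minimal, untested sketch of running one of these quants locally with `llama-cpp-python`; the file name below is a hypothetical example, so check the actual files in this repository, and the context size is only an assumption based on the 16k fine-tune / 32k base figures above:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=16384,     # the fine-tune used a 16k sequence length; the base model supports 32k
    n_gpu_layers=0,  # raise this to offload layers to a GPU
)

out = llm("Write one sentence about dolphins.", max_tokens=64)
print(out["choices"][0]["text"])
```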
|
mradermacher/GPT4-X-Alpasta-30b-i1-GGUF | mradermacher | 2024-05-06T05:11:08Z | 416 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MetaIX/GPT4-X-Alpasta-30b",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-07T02:41:28Z | ---
base_model: MetaIX/GPT4-X-Alpasta-30b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/MetaIX/GPT4-X-Alpasta-30b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-X-Alpasta-30b-i1-GGUF/resolve/main/GPT4-X-Alpasta-30b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_MediTron-GGUF | mradermacher | 2024-05-06T05:02:54Z | 416 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:LeroyDyer/Mixtral_AI_MediTron",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-10T23:05:23Z | ---
base_model: LeroyDyer/Mixtral_AI_MediTron
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_MediTron
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_MediTron-GGUF/resolve/main/Mixtral_AI_MediTron.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/Llama-3-11B-Instruct-v0.1 | MaziyarPanahi | 2024-04-19T18:10:35Z | 416 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-19T09:38:45Z | ---
base_model: "meta-llama/Meta-Llama-3-8B-Instruct"
library_name: transformers
tags:
- mergekit
- merge
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-11B-Instruct-v0.1
quantized_by: MaziyarPanahi
---
<img src="./llama-3-merges.webp" alt="Llama-3-11B-Instruct-v0.1" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama-3-11B-Instruct-v0.1
This model is a self-merge of the `meta-llama/Meta-Llama-3-8B-Instruct` model.
# How to use
You can use this model by using `MaziyarPanahi/Llama-3-11B-Instruct-v0.1` as the model name in Hugging Face's
transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-11B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipe(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Prompt template
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
what's 25-4*2+3<|eot_id|><|start_header_id|>assistant<|end_header_id|>
To evaluate this expression, we need to follow the order of operations (PEMDAS):
1. First, multiply 4 and 2: 4*2 = 8
2. Then, subtract 8 from 25: 25 - 8 = 17
3. Finally, add 3: 17 + 3 = 20
So, 25-4*2+3 = 20!<|eot_id|>
To evaluate this expression, we need to follow the order of operations (PEMDAS):
1. First, multiply 4 and 2: 4*2 = 8
2. Then, subtract 8 from 25: 25 - 8 = 17
3. Finally, add 3: 17 + 3 = 20
So, 25-4*2+3 = 20!
``` |
HuggingFaceFW/ablation-model-the-pile | HuggingFaceFW | 2024-04-25T08:34:47Z | 416 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T13:12:28Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tanganke/clip-vit-base-patch32_sun397 | tanganke | 2024-04-28T20:22:05Z | 416 | 0 | transformers | [
"transformers",
"safetensors",
"clip_vision_model",
"feature-extraction",
"dataset:tanganke/sun397",
"base_model:openai/clip-vit-base-patch32",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-04-28T20:21:15Z | ---
base_model:
- openai/clip-vit-base-patch32
datasets:
- tanganke/sun397
metrics:
- accuracy
---
# Model Card
## Model Details
- Architecture: ViT-Base with patch size 32
- Training Data: Sun397
## Training Details
Adam optimizer with a constant learning rate of 1e-5, trained for 4000 steps (batch_size=32).
Only the vision encoder is fine-tuned.
## Evaluation Results
- pre-trained: 0.63209068775177
- fine-tuned: 0.7501258850097656
## Usage
load vision model
```python
from transformers import CLIPVisionModel
vision_model = CLIPVisionModel.from_pretrained('tanganke/clip-vit-base-patch32_sun397')
```
substitute the vision encoder of clip
```python
from transformers import CLIPModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())
```
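For example (an illustrative sketch, not part of the original card), the patched CLIP model can then be used for zero-shot scene classification; the image path and candidate labels are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("example_scene.jpg")  # placeholder path
labels = ["a photo of a beach", "a photo of a kitchen", "a photo of a forest"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = clip_model(**inputs)  # clip_model from the snippet above
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```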
|
LiteLLMs/dolphin-2.9-llama3-8b-GGUF | LiteLLMs | 2024-04-29T14:27:42Z | 416 | 0 | null | [
"gguf",
"generated_from_trainer",
"axolotl",
"GGUF",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
]
| null | 2024-04-29T13:17:04Z |
---
license: other
tags:
- generated_from_trainer
- axolotl
- GGUF
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
model-index:
- name: out
results: []
quantized_by: andrijdavid
---
# dolphin-2.9-llama3-8b-GGUF
- Original model: [dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/dolphin-2.9-llama3-8b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/dolphin-2.9-llama3-8b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 - Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: dolphin-2.9-llama3-8b
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dolphin 2.9 Llama 3 8b 🐬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations
Discord: https://discord.gg/8fbBeC7ZGx
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
My appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 10xL40S node
This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.
It took 2.5 days on 8x L40S provided by Crusoe Cloud
This model was trained FFT on all parameters, using ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
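As a small illustrative sketch (assuming the tokenizer shipped with this model carries the ChatML chat template configured for it), the prompt can also be built programmatically instead of by hand:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-llama3-8b")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Summarize what function calling is."},
]

# Renders the ChatML format shown above, ending with the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```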
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that falls within accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false
load_in_8bit: false
load_in_4bit: false
strict: false
model_config:
datasets:
- path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl
type: sharegpt
conversation: chatml
chat_template: chatml
dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
val_set_size: 0.0002
output_dir: ./out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
wandb_project: dolphin-2.9-mixtral-8x22b
wandb_watch:
wandb_run_id:
wandb_log_model:
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
saves_per_epoch: 4
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
```
</details><br>
## Quants
GGUF : https://huggingface.co/QuantFactory/dolphin-2.9-llama3-8b-GGUF
GGUF with imatrix: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF
Exllamav2: https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-exl2
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.146 | 0.0005 | 1 | 1.1064 |
| 0.6962 | 0.2501 | 555 | 0.6636 |
| 0.6857 | 0.5001 | 1110 | 0.6503 |
| 0.6592 | 0.7502 | 1665 | 0.6419 |
| 0.6465 | 1.0002 | 2220 | 0.6317 |
| 0.5295 | 1.2395 | 2775 | 0.6408 |
| 0.5302 | 1.4895 | 3330 | 0.6351 |
| 0.5188 | 1.7396 | 3885 | 0.6227 |
| 0.521 | 1.9896 | 4440 | 0.6168 |
| 0.3968 | 2.2289 | 4995 | 0.6646 |
| 0.3776 | 2.4789 | 5550 | 0.6619 |
| 0.3983 | 2.7290 | 6105 | 0.6602 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
<!-- original-model-card end -->
|
arthrod/cicerollamatry8 | arthrod | 2024-06-01T10:14:11Z | 416 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-04T02:50:04Z | ---
library_name: transformers
tags: []
model-index:
- name: cicerollamatry8
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 68.44
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 57.02
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 48.2
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 90.36
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 76.44
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 75.99
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 85.26
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 54.99
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 61.91
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=arthrod/cicerollamatry8
name: Open Portuguese LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
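In the meantime, a minimal sketch of loading this checkpoint with 🤗 Transformers is shown below; the prompt and generation settings are illustrative assumptions, not the authors' recommendations.

```python
# Minimal sketch: load arthrod/cicerollamatry8 and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arthrod/cicerollamatry8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Portuguese example prompt ("Briefly explain what an adhesion contract is.")
prompt = "Explique em poucas palavras o que é um contrato de adesão."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```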
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/arthrod/cicerollamatry8) and on the [๐ Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**68.74**|
|ENEM Challenge (No Images)| 68.44|
|BLUEX (No Images) | 57.02|
|OAB Exams | 48.20|
|Assin2 RTE | 90.36|
|Assin2 STS | 76.44|
|FaQuAD NLI | 75.99|
|HateBR Binary | 85.26|
|PT Hate Speech Binary | 54.99|
|tweetSentBR | 61.91|
|
second-state/Llama-3-8B-Japanese-Instruct-GGUF | second-state | 2024-05-14T06:42:38Z | 416 | 3 | null | [
"gguf",
"text-generation",
"en",
"ja",
"base_model:haqishen/Llama-3-8B-Japanese-Instruct",
"license:other",
"region:us"
]
| text-generation | 2024-05-14T05:37:53Z | ---
license: other
license_name: llama3
base_model: haqishen/Llama-3-8B-Japanese-Instruct
inference: false
model_creator: haqishen
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
language:
- en
- ja
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3-8B-Japanese-Instruct-GGUF
## Original Model
[haqishen/Llama-3-8B-Japanese-Instruct](https://huggingface.co/haqishen/Llama-3-8B-Japanese-Instruct)
## Run with LlamaEdge
- LlamaEdge version: [v0.10.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.10.1) and above
- Prompt template
- Prompt type: `llama-3-chat`
- Prompt string
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
- Context size: `4096`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template llama-3-chat \
--ctx-size 4096 \
  --model-name Llama-3-8B-Japanese-Instruct
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template llama-3-chat \
--ctx-size 4096
```
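If you are assembling the prompt yourself instead of relying on the `llama-3-chat` template, a minimal Python sketch of building the string shown above is given below; the system and user messages are placeholders, and a blank line separates each header from its content, following the official Llama-3 format.

```python
# Minimal sketch: build the llama-3-chat prompt string from this card by hand.
def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Placeholder messages ("You are a helpful assistant." / "What is the capital of Japan?")
print(build_llama3_prompt("あなたは親切なアシスタントです。", "日本の首都はどこですか?"))
```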
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Llama-3-8B-Japanese-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q2_K.gguf) | Q2_K | 2 | 3.18 GB| smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 4.32 GB| small, substantial quality loss |
| [Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 4.02 GB| very small, high quality loss |
| [Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 3.66 GB| very small, high quality loss |
| [Llama-3-8B-Japanese-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_0.gguf) | Q4_0 | 4 | 4.66 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 4.92 GB| medium, balanced quality - recommended |
| [Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 4.69 GB| small, greater quality loss |
| [Llama-3-8B-Japanese-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_0.gguf) | Q5_0 | 5 | 5.6 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 5.73 GB| large, very low quality loss - recommended |
| [Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB| large, low quality loss - recommended |
| [Llama-3-8B-Japanese-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q6_K.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss |
| [Llama-3-8B-Japanese-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss - not recommended |
| [Llama-3-8B-Japanese-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-f16.gguf) | f16 | 16 | 16.1 GB| |
*Quantized with llama.cpp b2824.*
|
RichardErkhov/dfurman_-_LLaMA-7B-gguf | RichardErkhov | 2024-05-26T12:41:11Z | 416 | 0 | null | [
"gguf",
"arxiv:2302.13971",
"region:us"
]
| null | 2024-05-26T10:41:59Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLaMA-7B - GGUF
- Model creator: https://huggingface.co/dfurman/
- Original model: https://huggingface.co/dfurman/LLaMA-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LLaMA-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q2_K.gguf) | Q2_K | 2.36GB |
| [LLaMA-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [LLaMA-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [LLaMA-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [LLaMA-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [LLaMA-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q3_K.gguf) | Q3_K | 3.07GB |
| [LLaMA-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [LLaMA-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [LLaMA-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [LLaMA-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q4_0.gguf) | Q4_0 | 3.56GB |
| [LLaMA-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [LLaMA-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [LLaMA-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q4_K.gguf) | Q4_K | 3.8GB |
| [LLaMA-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [LLaMA-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q4_1.gguf) | Q4_1 | 3.95GB |
| [LLaMA-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q5_0.gguf) | Q5_0 | 4.33GB |
| [LLaMA-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [LLaMA-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q5_K.gguf) | Q5_K | 4.45GB |
| [LLaMA-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [LLaMA-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q5_1.gguf) | Q5_1 | 4.72GB |
| [LLaMA-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q6_K.gguf) | Q6_K | 5.15GB |
| [LLaMA-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/dfurman_-_LLaMA-7B-gguf/blob/main/LLaMA-7B.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
pipeline_tag: text-generation
license: other
---
<div align="center">
<img src="./assets/llama.png" width="150px">
</div>
# LLaMA-7B
LLaMA-7B is a base model for text generation with 6.7B parameters and a 1T token training corpus. It was built and released by the FAIR team at Meta AI alongside the paper "[LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)".
This model repo was converted to work with the transformers package. It is under a bespoke **non-commercial** license, please see the [LICENSE](https://huggingface.co/dfurman/llama-7b/blob/main/LICENSE) file for more details.
## Model Summary
- **Model Type:** Causal decoder-only.
- **Dataset:** The model was trained on 1T tokens using the following data sources: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%].
- **Language(s):** The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk.
- **License:** Bespoke non-commercial license, see [LICENSE](https://huggingface.co/dfurman/llama-7b/blob/main/LICENSE) file.
- **Model date:** LLaMA was trained between Dec 2022 and Feb 2023.
**Where to send inquiries about the model:**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses:**
The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding, or reading comprehension; understanding the capabilities and limitations of current language models and developing techniques to improve them; and evaluating and mitigating biases, risks, toxic and harmful content generation, and hallucinations.
**Primary intended users:**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases:**
LLaMA is a base model, also known as a foundation model. As such, it should not be used on downstream applications without further risk evaluation, mitigation, and additional fine-tuning. In particular, the model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors:**
One of the most relevant factors for which model performance may vary is which language is used. Although 20 languages were included in the training data, most of the LLaMA dataset is made of English text, and the model is thus expected to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, which is likely also the case for LLaMA.
**Evaluation factors:**
As LLaMA is trained on data from the Web, it is expected that the model reflects biases from this source. The RAI datasets are thus used to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. The toxicity of model generations is also measured, depending on the toxicity of the context used to prompt the model.
## Ethical considerations
**Data:**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. LLaMA is thus expected to exhibit such biases from the training data.
**Human life:**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations:**
The data was filtered from the Web based on its proximity to Wikipedia text and references. For this, the Kneser-Ney language model is used with a fastText linear classifier.
**Risks and harms:**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. LLaMA is not expected to be an exception in this regard.
**Use cases:**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
## How to Get Started with the Model
### Setup
```python
!pip install -q -U transformers accelerate torch
```
### GPU Inference in fp16
This requires a GPU with at least 15GB of VRAM.
### First, Load the Model
```python
import transformers
import torch
model_name = "dfurman/llama-7b"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
streamer = transformers.TextStreamer(tokenizer)
model = transformers.LlamaForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
```
### Next, Run the Model
```python
prompt = "An increasing sequence: one,"
inputs = tokenizer(
prompt,
padding=True,
truncation=True,
return_tensors='pt',
return_token_type_ids=False,
).to("cuda")
_ = model.generate(
**inputs,
max_new_tokens=20,
streamer=streamer,
)
```
|
mradermacher/quill-72b-instruct-GGUF | mradermacher | 2024-05-30T21:36:15Z | 416 | 18 | transformers | [
"transformers",
"gguf",
"en",
"base_model:billyjoe/quill-72b-instruct",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-29T11:23:46Z | ---
base_model: billyjoe/quill-72b-instruct
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: qianwen-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/billyjoe/quill-72b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
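As a concrete illustration of the multi-part note above, the Python sketch below joins split quant files back into a single GGUF; the filenames are examples taken from the table that follows — adjust them to whichever quant you downloaded.

```python
# Minimal sketch: concatenate split GGUF parts (*.part1of2, *.part2of2, ...) into one file.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("quill-72b-instruct.Q6_K.gguf.part*of*"))
assert parts, "download the .partXofY files into this directory first"

with open("quill-72b-instruct.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)

print(f"wrote quill-72b-instruct.Q6_K.gguf from {len(parts)} parts")
```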
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.IQ3_XS.gguf) | IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.IQ3_M.gguf) | IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.SOURCE.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.SOURCE.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/quill-72b-instruct-GGUF/resolve/main/quill-72b-instruct.SOURCE.gguf.part3of3) | SOURCE | 145.5 | source gguf, only provided when it was hard to come by |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mnoukhov/pythia410m-rm-tldr | mnoukhov | 2024-06-02T23:53:06Z | 416 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:mnoukhov/pythia410m-sft-tldr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-06-01T23:29:49Z | ---
license: apache-2.0
base_model: mnoukhov/pythia410m-sft-tldr
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pythia410m-rm-tldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia410m-rm-tldr
This model is a fine-tuned version of [mnoukhov/pythia410m-sft-tldr](https://huggingface.co/mnoukhov/pythia410m-sft-tldr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4313
- Accuracy: 0.7928
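Since this checkpoint is a reward model trained with TRL's `RewardTrainer`, a minimal scoring sketch is shown below; the post/summary text and its formatting are assumptions — match the formatting used during training for meaningful scores.

```python
# Minimal sketch: score a TL;DR summary with the reward model (higher = preferred).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mnoukhov/pythia410m-rm-tldr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = (
    "POST: My laptop battery drains in about an hour even when idle...\n\n"
    "TL;DR: Laptop battery dies very fast; looking for fixes."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()
print(f"reward: {reward:.3f}")
```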
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5145 | 0.2006 | 291 | 0.4726 | 0.7659 |
| 0.4356 | 0.4011 | 582 | 0.4641 | 0.7730 |
| 0.3823 | 0.6017 | 873 | 0.4456 | 0.7860 |
| 0.3616 | 0.8022 | 1164 | 0.4313 | 0.7928 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
eccheng/Phi-3-mini-128k-instruct | eccheng | 2024-06-10T20:15:46Z | 416 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-10T20:13:39Z | Entry not found |
Ali-C137/L2H10M-0000 | Ali-C137 | 2024-06-21T12:42:20Z | 416 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-17T21:55:35Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CHE-72/TAIDE-LX-7B-Chat-Q5_K_S-GGUF | CHE-72 | 2024-06-22T17:19:52Z | 416 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
]
| null | 2024-06-22T17:19:32Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: You must agree to the license terms before using this model
extra_gated_fields:
  Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  geo: ip_location
  By clicking Submit below I accept the terms of the license and privacy policy: checkbox
extra_gated_prompt: '* ### [TAIDE L Models Community License Agreement (License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
  * ### [Personal Data Collection Notice (Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: Submit
---
# CHE-72/TAIDE-LX-7B-Chat-Q5_K_S-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_K_S-GGUF --hf-file taide-lx-7b-chat-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_K_S-GGUF --hf-file taide-lx-7b-chat-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_K_S-GGUF --hf-file taide-lx-7b-chat-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q5_K_S-GGUF --hf-file taide-lx-7b-chat-q5_k_s.gguf -c 2048
```
|
John6666/ioli-pony-mix-v2-sdxl-spo | John6666 | 2024-06-22T21:35:10Z | 416 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-22T21:27:10Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://civitai.com/models/517435?modelVersionId=574987).
|
cambridgeltl/BioRedditBERT-uncased | cambridgeltl | 2023-04-05T15:51:20Z | 415 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"BioNLP",
"social_media",
"en",
"arxiv:2010.03295",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- BioNLP
- social_media
---
# BioRedditBERT
## Model description
BioRedditBERT is a BERT model initialised from BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) and further pre-trained on health-related Reddit posts. Please view our paper [COMETA: A Corpus for Medical Entity Linking in the Social Media](https://arxiv.org/pdf/2010.03295.pdf) (EMNLP 2020) for more details.
## Training data
We crawled all threads from 68 health-themed subreddits, such as `r/AskDocs` and `r/health`, from the beginning of 2015 to the end of 2018, obtaining a collection of more than 800K discussions. This collection was then pruned by removing deleted posts, comments from bots or moderators, and so on. In the end, we obtained a training corpus of ca. 300 million tokens and a vocabulary size of ca. 780,000 words.
## Training procedure
We use the same pre-training script as in the original [google-research/bert](https://github.com/google-research/bert) repo. The model is initialised with [`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`](https://github.com/dmis-lab/biobert).
We train with a batch size of 64, a maximum sequence length of 64, and a learning rate of `2e-5` for 100k steps on two GeForce GTX 1080Ti (11 GB) GPUs. Other hyper-parameters are kept at their defaults.
## Eval results
To show the benefit of further pre-training on the social media domain, we report results on a medical entity linking dataset that is also drawn from social media: [AskAPatient](https://zenodo.org/record/55013#.X4ncRmTYpb8) [(Limsopatham and Collier 2016)](https://www.aclweb.org/anthology/P16-1096.pdf).
We follow the same 10-fold cross-validation procedure for all models and report the average result without fine-tuning. The `[CLS]` token is used as the representation for entity mentions (we also tried averaging all tokens but found `[CLS]` generally performs better).
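For illustration, a minimal sketch of extracting the `[CLS]` representation with 🤗 Transformers is shown below; the example mention is made up.

```python
# Minimal sketch: get the [CLS] embedding of an entity mention with BioRedditBERT.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "cambridgeltl/BioRedditBERT-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

mention = "heart attack"  # made-up example mention
inputs = tokenizer(mention, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] is the first token
print(cls_embedding.shape)  # torch.Size([1, 768])
```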
Model | Accuracy@1 | Accuracy@5
-------|---------|---------
[BERT-base-uncased](https://huggingface.co/bert-base-uncased) | 38.2 | 43.3
[BioBERT v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) | 41.4 | 51.5
[ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) | 43.9 | 54.3
[BlueBERT](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/NCBI_BERT_pubmed_mimic_uncased_L-12_H-768_A-12.zip) | 41.5 | 48.5
[SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) | 42.3 | 51.9
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) | 42.5 | 49.6
BioRedditBERT | **44.3** | **56.2**
### BibTeX entry and citation info
```bibtex
@inproceedings{basaldella-2020-cometa,
title = "{COMETA}: A Corpus for Medical Entity Linking in the Social Media",
author = "Basaldella, Marco and Liu, Fangyu, and Shareghi, Ehsan, and Collier, Nigel",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics"
}
```
|
timm/maxvit_base_tf_512.in21k_ft_in1k | timm | 2023-05-10T23:59:59Z | 415 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2204.01697",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-12-02T21:50:28Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for maxvit_base_tf_512.in21k_ft_in1k
An official MaxViT image classification model. Pretrained in TensorFlow on ImageNet-21k (the 21,843-class, Google-specific instance of ImageNet-22k) and fine-tuned on ImageNet-1k by the paper authors.
Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 119.9
- GMACs: 138.0
- Activations (M): 704.0
- Image size: 512 x 512
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_base_tf_512.in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_base_tf_512.in21k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 256, 256])
# torch.Size([1, 96, 128, 128])
# torch.Size([1, 192, 64, 64])
# torch.Size([1, 384, 32, 32])
# torch.Size([1, 768, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_base_tf_512.in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 16, 16) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
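These throughput figures come from timm's own benchmarking runs on specific hardware, so they will not transfer exactly to other setups. As a rough sketch of how you could measure inference throughput yourself for one of the models above (the model choice, batch size, precision, and iteration counts below are assumptions, not the settings used for this table):

```python
import time

import torch
import timm

# Any model name from the table works here; maxvit_nano_rw_256 is just an example.
model = timm.create_model("maxvit_nano_rw_256.sw_in1k", pretrained=False).eval().cuda()
batch = torch.randn(64, 3, 256, 256, device="cuda")

with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
    for _ in range(10):  # warmup iterations
        model(batch)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(20):  # timed iterations
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"{20 * batch.shape[0] / elapsed:.1f} samples / sec")
```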
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
facebook/convnextv2-femto-1k-224 | facebook | 2023-09-04T19:39:11Z | 415 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"convnextv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2301.00808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-17T14:53:54Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXt V2 (femto-sized model)
ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-1K dataset at resolution 224x224. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
Disclaimer: The team releasing ConvNeXt V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-femto-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-femto-1k-224")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
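Alternatively, the `pipeline` helper wraps the same preprocessing and model call in one object. A minimal sketch using one of the widget images from this card (generic `transformers` pipeline usage, not something specific to this checkpoint):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="facebook/convnextv2-femto-1k-224")

# Accepts a URL, a local path, or a PIL image
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts
```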
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
author = {Sanghyun Woo and
Shoubhik Debnath and
Ronghang Hu and
Xinlei Chen and
Zhuang Liu and
In So Kweon and
Saining Xie},
title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
journal = {CoRR},
volume = {abs/2301.00808},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.00808},
doi = {10.48550/arXiv.2301.00808},
eprinttype = {arXiv},
eprint = {2301.00808},
timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
timm/resnetv2_101.a1h_in1k | timm | 2024-02-10T23:35:33Z | 415 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1603.05027",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-22T21:06:33Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for resnetv2_101.a1h_in1k
A ResNet-V2 (pre-activation ResNet) image classification model. Trained on ImageNet-1k by Ross Wightman in `timm` using ResNet strikes back (RSB) `A1` based recipe.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.5
- GMACs: 7.8
- Activations (M): 16.2
- Image size: 224 x 224
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Identity Mappings in Deep Residual Networks: https://arxiv.org/abs/1603.05027
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetv2_101.a1h_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetv2_101.a1h_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetv2_101.a1h_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@article{He2016,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Identity Mappings in Deep Residual Networks},
journal = {arXiv preprint arXiv:1603.05027},
year = {2016}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
redstonehero/dreamshaper-inpainting | redstonehero | 2023-04-23T18:00:38Z | 415 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-04-23T07:23:48Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
--- |
sanchit-gandhi/clap-htsat-unfused-m-full | sanchit-gandhi | 2023-04-26T08:58:17Z | 415 | 1 | transformers | [
"transformers",
"pytorch",
"clap",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-04-26T08:41:23Z | Entry not found |
nicholasKluge/Aira-2-portuguese-124M | nicholasKluge | 2024-06-18T11:23:52Z | 415 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"pt",
"dataset:nicholasKluge/instruct-aira-dataset",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-06-10T23:50:58Z | ---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- pt
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "<|startofinstruction|>Vocรช pode me explicar o que รฉ Aprendizagem de Mรกquina?<|endofinstruction|>"
example_title: Aprendizagem de Mรกquina
- text: "<|startofinstruction|>Vocรช sabe alguma coisa sobre รtica das Virtudes?<|endofinstruction|>"
example_title: รtica
- text: "<|startofinstruction|>Como eu posso fazer a minha namorada feliz?<|endofinstruction|>"
example_title: Conselho
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 350
source: CodeCarbon
training_type: fine-tuning
geographical_location: Singapore
hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-portuguese-124M
Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-portuguese-124M is an instruction-tuned model based on [GPT-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo-Portuguese).
## Details
- **Size:** 124,441,344 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** Portuguese
- **Number of Epochs:** 5
- **Batch size:** 24
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.35 KgCO2 (Singapore)
- **Total Energy Consumption:** 0.73 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`O que é um modelo de linguagem?`<|endofinstruction|>`Um modelo de linguagem é uma distribuição de probabilidade sobre um vocabulário.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')
aira.eval()
aira.to(device)
question = input("Enter your question: ")
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
add_special_tokens=False,
return_tensors="pt").to(device)
responses = aira.generate(**inputs, num_return_sequences=2)
print(f"Question: ๐ค {question}\n")
for i, response in enumerate(responses):
print(f'Response {i+1}: ๐ค {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>> Question: 👤 Qual a capital do Brasil?
>>> Response 1: 🤖 A capital do Brasil é Brasília.
>>> Response 2: 🤖 A capital do Brasil é Brasília.
```
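The widget configuration above suggests sampling parameters for this model. Continuing from the snippet above (reusing `aira` and `inputs`), they can be passed straight to `generate`; note that `do_sample=True` is an assumption added here so that the temperature and top-k/top-p settings actually take effect:

```python
responses = aira.generate(
    **inputs,
    num_return_sequences=2,
    do_sample=True,          # assumption: enables the sampling parameters below
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    repetition_penalty=1.2,
    max_new_tokens=200,
)
```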
## Limitations
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
## Evaluation
| Model | Average | [ARC](https://arxiv.org/abs/1803.05457) | [TruthfulQA](https://arxiv.org/abs/2109.07958) | [ToxiGen](https://arxiv.org/abs/2203.09509) |
|---------------------------------------------------------------------------------------|-----------|-----------------------------------------|------------------------------------------------|---------------------------------------------|
| [Aira-2-portuguese-124M](https://huggingface.co/nicholasKluge/Aira-2-portuguese-124M) | **32.73** | **24.87** | 40.60 | None |
| Gpt2-small-portuguese | 31.96 | 22.48 | **41.44** | None |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). The ToxiGen evaluation was not performed because the task is not available in Portuguese. Thanks to [Laiviet](https://github.com/laiviet/lm-evaluation-harness) for translating some of the tasks in the LM-Evaluation-Harness.
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
Aira-2-portuguese-124M is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
TheBloke/MythoBoros-13B-GGUF | TheBloke | 2023-09-27T12:52:22Z | 415 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Gryphe/MythoBoros-13b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-19T22:35:14Z | ---
language:
- en
license: other
model_name: MythoBoros 13B
base_model: Gryphe/MythoBoros-13b
inference: false
model_creator: Gryphe
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MythoBoros 13B - GGUF
- Model creator: [Gryphe](https://huggingface.co/Gryphe)
- Original model: [MythoBoros 13B](https://huggingface.co/Gryphe/MythoBoros-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Gryphe's MythoBoros 13B](https://huggingface.co/Gryphe/MythoBoros-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MythoBoros-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF)
* [Gryphe's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Gryphe/MythoBoros-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mythoboros-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [mythoboros-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [mythoboros-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [mythoboros-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [mythoboros-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mythoboros-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [mythoboros-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [mythoboros-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mythoboros-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [mythoboros-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [mythoboros-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [mythoboros-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MythoBoros-13B-GGUF/blob/main/mythoboros-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MythoBoros-13B-GGUF and below it, a specific filename to download, such as: mythoboros-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MythoBoros-13B-GGUF mythoboros-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MythoBoros-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MythoBoros-13B-GGUF mythoboros-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mythoboros-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MythoBoros-13B-GGUF", model_file="mythoboros-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
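The `llama-cpp-python` library mentioned above can load the same files. A minimal sketch follows; the local filename, layer offload count, and the example instruction are assumptions — point `model_path` at whichever quant you downloaded:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mythoboros-13b.Q4_K_M.gguf",  # path to the GGUF file you downloaded
    n_ctx=2048,
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a lighthouse during a storm.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```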
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Gryphe's MythoBoros 13B
## Model details
MythoBoros-13b can be considered a sister model to [MythoLogic-13b](https://huggingface.co/Gryphe/MythoLogic-13b), sharing the same goals but having a different approach.
Whereas the previous model was a series of experimental gradient merges, this one is a simple straight-up 66/34 merge of [Chronos](https://huggingface.co/elinas/chronos-13b) and the freshly released [Ouroboros](https://huggingface.co/CalderaAI/13B-Ouroboros), providing a very solid foundation for a well-performing roleplaying model.
MythoBoros tends to be somewhat more formal with its responses in comparison to MythoLogic.
My advice? Try both, see which one you prefer.
Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoBoros-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ) (You're the best!)
## Prompt Format
This model works best with Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
<!-- original-model-card end -->
|
TheBloke/MAmmoTH-Coder-34B-GGUF | TheBloke | 2023-09-27T12:53:46Z | 415 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:TIGER-Lab/MathInstruct",
"arxiv:2309.05653",
"base_model:TIGER-Lab/MAmmoTH-Coder-34B",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T22:25:59Z | ---
language:
- en
license: mit
datasets:
- TIGER-Lab/MathInstruct
model_name: MAmmoTH Coder 34B
base_model: TIGER-Lab/MAmmoTH-Coder-34B
inference: false
model_creator: TIGER-Lab
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MAmmoTH Coder 34B - GGUF
- Model creator: [TIGER-Lab](https://huggingface.co/TIGER-Lab)
- Original model: [MAmmoTH Coder 34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [TIGER-Lab's MAmmoTH Coder 34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF)
* [TIGER-Lab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [TIGER-Lab's MAmmoTH Coder 34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mammoth-coder-34b.Q2_K.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [mammoth-coder-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [mammoth-coder-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [mammoth-coder-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [mammoth-coder-34b.Q4_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mammoth-coder-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [mammoth-coder-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [mammoth-coder-34b.Q5_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mammoth-coder-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [mammoth-coder-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [mammoth-coder-34b.Q6_K.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [mammoth-coder-34b.Q8_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-Coder-34B-GGUF/blob/main/mammoth-coder-34b.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MAmmoTH-Coder-34B-GGUF and below it, a specific filename to download, such as: mammoth-coder-34b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MAmmoTH-Coder-34B-GGUF mammoth-coder-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MAmmoTH-Coder-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MAmmoTH-Coder-34B-GGUF mammoth-coder-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mammoth-coder-34b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MAmmoTH-Coder-34B-GGUF", model_file="mammoth-coder-34b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
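As a minimal sketch of the llama-cpp-python route with LangChain (the import path below matches the LangChain versions current when this card was written — newer releases move the class to `langchain_community.llms` — and the file path, settings, and example instruction are assumptions):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./mammoth-coder-34b.Q4_K_M.gguf",  # point at the GGUF file you downloaded
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 for CPU-only inference
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that returns the n-th Fibonacci number. "
    "Let's write a program.\n\n### Response:\n"
)

print(llm(prompt))
```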
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: TIGER-Lab's MAmmoTH Coder 34B
# 🦣 MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH/](https://tiger-ai-lab.github.io/MAmmoTH/)
Paper: [https://arxiv.org/pdf/2309.05653.pdf](https://arxiv.org/pdf/2309.05653.pdf)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
## Introduction
We introduce 🦣 MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields.
| | **Base Model: Llama-2** | **Base Model: Code Llama** |
|-----|---------------------------------------------------------------|--------------------------------------------------------------------------|
| 7B  | 🦣 [MAmmoTH-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B)   | 🦣 [MAmmoTH-Coder-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B)   |
| 13B | 🦣 [MAmmoTH-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 🦣 [MAmmoTH-Coder-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B) |
| 34B | -                                                              | 🦣 [MAmmoTH-Coder-34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B) |
| 70B | 🦣 [MAmmoTH-70B](https://huggingface.co/TIGER-Lab/MAmmoTH-70B) | -                                                                          |
## Training Data
The models are trained on the 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), which is compiled from 13 different math rationale datasets. Check out the dataset card for more details.
## Training Procedure
The models are fine-tuned with the MathInstruct dataset using the original Llama-2 and Code Llama models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| Model | Size | Base | GSM8K | MATH | AQuA | NumGLUE | IID Avg | SVAMP | Mathematics | SimulEq | SAT-Math | MMLU-Math | OOD Avg |
|-------------------|-------|---------------|-----------|-------|-------|-----------|---------------|-----------|---------------|-----------|-----------|---------------|---------------|
| MAmmoTH | 7B | Llama-2 | 51.7 | 31.2 | 42.9 | 53.1 | 44.7 | 66.7 | 44.8 | 42 | 36.4 | 38.6 | 45.7 |
| MAmmoTH-Coder | 7B | Code-Llama | 58.8 | 35.2 | 43 | 57.1 | 48.5 | 71.1 | 53.9 | 44.6 | 40 | 40.5 | 50.2 |
| MAmmoTH | 13B | Llama-2 | 61.7 | 36 | 44.8 | 59.6 | 50.5 | 72.4 | 48.7 | 40.5 | 42.7 | 45.3 | 49.9 |
| MAmmoTH-Coder | 13B | Code-Llama | 64.3 | 38.6 | 46.1 | 54.2 | 50.8 | 73.2 | 60 | 44.1 | 40.9 | 45.2 | 52.6 |
| MAmmoTH-Coder | 34B | Code-Llama | 72.3 | 46.8 | 50.8 | 59.6 | 57.3 | 84 | 64.7 | 50.6 | 51.8 | 50.2 | 60.3 |
| MAmmoTH | 70B | Llama-2 | 76.7 | 44.2 | 61.4 | 64.3 | 61.7 | 81.7 | 55.3 | 45.3 | 58.6 | 52.3 | 58.6 |
## Usage
You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH)
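A minimal sketch of that pipeline-based usage (the math problem, generation settings, and `device_map` handling are illustrative assumptions; the 34B model needs substantial GPU memory, or pick a smaller variant from the table above):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH-Coder-34B",
    device_map="auto",  # requires `accelerate`; spreads the weights across available GPUs
)

# Program-of-thought style prompt, following the format described in the next section
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is the sum of the first 50 positive integers? Let's write a program.\n\n"
    "### Response:\n"
)

print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```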
## Prompt Format
If you want to do CoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
If you want to do PoT:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction} Let's write a program.
### Response:
```
## Intended Uses
These models are trained for research purposes. They are designed to solve general math problems. They can be used in educational software, tutoring systems, or any application where a solution to a math problem is needed. The models can generate both a chain of thought (CoT) rationale and a program of thought (PoT) rationale, providing a comprehensive solution to a given math problem.
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
author={Xiang Yue and Xingwei Qu and Ge Zhang and Yao Fu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
```
<!-- original-model-card end -->
|
TheBloke/Pandalyst_13B_V1.0-GGUF | TheBloke | 2023-09-30T14:35:35Z | 415 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"code",
"en",
"base_model:pipizhao/Pandalyst_13B_V1.0",
"license:llama2",
"model-index",
"text-generation-inference",
"region:us"
]
| null | 2023-09-30T14:29:33Z | ---
base_model: pipizhao/Pandalyst_13B_V1.0
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: Pandalyst_13B_v1.0
results:
- metrics:
- name: exec@1
type: exec@1
value: 0.71
verified: false
task:
type: text-generation
model_creator: Yanzhao Zheng
model_name: Pandalyst 13B V1.0
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pandalyst 13B V1.0 - GGUF
- Model creator: [Yanzhao Zheng](https://huggingface.co/pipizhao)
- Original model: [Pandalyst 13B V1.0](https://huggingface.co/pipizhao/Pandalyst_13B_V1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yanzhao Zheng's Pandalyst 13B V1.0](https://huggingface.co/pipizhao/Pandalyst_13B_V1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF)
* [Yanzhao Zheng's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pipizhao/Pandalyst_13B_V1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
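As a worked example of where these bpw figures come from, the 4.5 bpw of GGML_TYPE_Q4_K can be reproduced as follows. Note that the per-super-block fp16 scale and min are an assumption about the usual k-quant layout, not something stated above:

```python
# Hypothetical walk-through of the 4.5 bpw figure for GGML_TYPE_Q4_K,
# assuming a 256-weight super-block (8 blocks x 32 weights) plus one
# fp16 scale and one fp16 min stored per super-block.
quant_bits = 256 * 4            # 4-bit quantized weights
scale_min_bits = 8 * (6 + 6)    # 6-bit scale and 6-bit min per block
superblock_bits = 2 * 16        # fp16 super-block scale and min
print((quant_bits + scale_min_bits + superblock_bits) / 256)  # -> 4.5 bpw
```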
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [pandalyst_13b_v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [pandalyst_13b_v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [pandalyst_13b_v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [pandalyst_13b_v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [pandalyst_13b_v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pandalyst_13b_v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [pandalyst_13b_v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [pandalyst_13b_v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pandalyst_13b_v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [pandalyst_13b_v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [pandalyst_13b_v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [pandalyst_13b_v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/Pandalyst_13B_V1.0-GGUF/blob/main/pandalyst_13b_v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Pandalyst_13B_V1.0-GGUF and below it, a specific filename to download, such as: pandalyst_13b_v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Pandalyst_13B_V1.0-GGUF pandalyst_13b_v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Pandalyst_13B_V1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pandalyst_13B_V1.0-GGUF pandalyst_13b_v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m pandalyst_13b_v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pandalyst_13B_V1.0-GGUF", model_file="pandalyst_13b_v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
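Since this model expects the Alpaca prompt format shown above, a more realistic call might look like the sketch below. It reuses the `llm` object from the previous snippet; the question and generation settings are illustrative.

```python
# Build an Alpaca-style prompt as described in the "Prompt template" section above.
instruction = "Using pandas, compute the mean of the 'price' column of a DataFrame named df."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

print(llm(prompt, max_new_tokens=256, temperature=0.7))
```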
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
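As a rough illustration of the llama-cpp-python route, a LangChain integration could look like the following sketch. The import path varies by LangChain version (older releases use `from langchain.llms import LlamaCpp`), and the model path and parameters are assumptions:

```python
from langchain_community.llms import LlamaCpp

# Point the wrapper at a locally downloaded GGUF file from this repo (path is illustrative).
llm = LlamaCpp(
    model_path="./pandalyst_13b_v1.0.Q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=32,   # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarise what the pandas groupby method does.\n\n### Response:"
)
print(llm.invoke(prompt))
```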
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, ์ค๊ต ๊น, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjรคreholt, ้ฟๆ, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yanzhao Zheng's Pandalyst 13B V1.0
## Pandalyst: A large language model for mastering data analysis using pandas
<p align="center">
<img src="https://raw.githubusercontent.com/zhengyanzhao1997/Pandalyst/master/imgs/pandalyst.png" width="300"/>
</p>
<p align="center">
๐ฑ <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github Repo</a> <br>
</p>
**What is Pandalyst**
- Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library.
**How good is Pandalyst**
- Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs.
**Why use Pandalyst**
- Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on a local PC.
- Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail.
- Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models.
## News
- 🔥[2023/09/30] We released **Pandalyst-7B-V1.1**, which was trained on **CodeLlama-7b-Python** and achieves **76.1 exec@1** on our **PandaTest_V1.0**, surpassing **Pandalyst-13B-V1.0**, **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
- 🔥[2023/09/28] We released **Pandalyst-13B-V1.0**, which was trained on **WizardCoder-Python-13B-V1.0** and achieves **70.7 exec@1** on our **PandaTest_V1.0**, surpassing **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Base Model | PandaTest_V1.0 | EASY | HARD | License |
|--------------------|---------------------------------------------------------------------------------------------|------------|----------------|---------------------|---------------------| ----- |
| Pandalyst-13B-V1.0 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst_13B_V1.0" target="_blank">HF Link</a> | WizardCoder-Python-13B-V1.0 | 70.7 | 75.6 | 65.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| Pandalyst-7B-V1.1 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst-7B-V1.1" target="_blank">HF Link</a> | CodeLlama-7b-Python | 76.1 | 85.2 | 67.0 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
## Usage and Human evaluation
Please refer to <a href="https://github.com/zhengyanzhao1997/Pandalyst" target="_blank">Github</a>.
<!-- original-model-card end -->
|
nvidia/nemotron-3-8b-chat-4k-steerlm | nvidia | 2024-02-09T04:59:22Z | 415 | 16 | nemo | [
"nemo",
"nvidia",
"nemotron-3",
"8B",
"text-generation",
"en",
"ar",
"az",
"bg",
"bn",
"ca",
"cs",
"da",
"de",
"el",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ka",
"kk",
"kn",
"ko",
"lt",
"lv",
"mk",
"ml",
"mr",
"ne",
"nl",
"no",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sq",
"sr",
"sv",
"ta",
"te",
"tr",
"uk",
"ur",
"vi",
"ja",
"zh",
"arxiv:2310.05344",
"license:other",
"region:us"
]
| text-generation | 2023-11-15T03:11:37Z | ---
license: other
license_name: nv-ai-foundation-models-license
license_link: https://developer.nvidia.com/downloads/nv-ai-foundation-models-license
library_name: nemo
extra_gated_heading: Access Nemotron 3 8B on Hugging Face
extra_gated_description: >-
To download this model, you must agree to the terms of the [NVIDIA AI Foundation Models Community License Agreement](https://developer.nvidia.com/downloads/nv-ai-foundation-models-license).
extra_gated_fields:
I agree to share my name, email address and username with NVIDIA: checkbox
geo: ip_location
language:
- "en"
- "ar"
- "az"
- "bg"
- "bn"
- "ca"
- "cs"
- "da"
- "de"
- "el"
- "es"
- "et"
- "fa"
- "fi"
- "fr"
- "gl"
- "he"
- "hi"
- "hr"
- "hu"
- "hy"
- "id"
- "is"
- "it"
- "ka"
- "kk"
- "kn"
- "ko"
- "lt"
- "lv"
- "mk"
- "ml"
- "mr"
- "ne"
- "nl"
- "no"
- "pl"
- "pt"
- "ro"
- "ru"
- "sk"
- "sl"
- "sq"
- "sr"
- "sv"
- "ta"
- "te"
- "tr"
- "uk"
- "ur"
- "vi"
- "ja"
- "zh"
pipeline_tag: text-generation
inference: false
fine-tuning: true
tags:
- nvidia
- nemotron-3
- 8B
---
# Nemotron-3-8B-Chat-4k-SteerLM
## Model Overview
### License
The use of this model is governed by the [NVIDIA AI Foundation Models Community License Agreement](https://developer.nvidia.com/downloads/nv-ai-foundation-models-license).
### Description
Nemotron-3-8B-SteerLM is an 8 billion parameter generative language model instruct-tuned on an 8B base model. It takes input with context length up to 4,096 tokens. The model has been customized using the [SteerLM method](https://arxiv.org/abs/2310.05344) developed by NVIDIA to allow for user control of model outputs during inference.
Key capabilities enabled by SteerLM:
- Dynamic steering of responses by specifying desired attributes like quality, helpfulness, and toxicity at inference time.
- Simplified training compared to RLHF techniques like fine-tuning and bootstrapping.
Nemotron-3-8B-SteerLM is part of Nemotron-3, which is a family of enterprise ready generative text models compatible with [NVIDIA NeMo Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/). For other models in this collection, see the [collections page](https://huggingface.co/collections/nvidia/nemotron-3-8b-6553adeb226f6ab4ffc356f9)
NVIDIA NeMo is an end-to-end, cloud-native platform to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. To get access to NeMo Framework, please sign up at [this link](https://developer.nvidia.com/nemo-framework/join).
### References
[Announcement Blog](https://developer.nvidia.com/blog/nvidia-ai-foundation-models-build-custom-enterprise-chatbots-and-co-pilots-with-production-ready-llms/)
### Model Architecture
**Architecture Type:** Transformer
**Network Architecture:** Generative Pre-Trained Transformer (GPT-3)
The SteerLM method involves the following key steps:
1. Train an attribute prediction model on human annotated data to evaluate response quality.
2. Use this model to annotate diverse datasets and enrich training data.
3. Perform conditioned fine-tuning to align responses with specified combinations of attributes.
4. (Optionally) Bootstrap training through model sampling and further fine-tuning.
SteerLM-8B applies this technique on top of the open-source NVIDIA GPT model architecture. It was pretrained on internet-scale data and then customized using [OASST](https://huggingface.co/datasets/OpenAssistant/oasst1), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), [Light](https://github.com/facebookresearch/ParlAI/blob/9974b947fb2e801dc5608f495828532c2a714742/parlai/tasks/light_dialog/build.py#L14), a subset of permissive licensed [OpenPlatypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), and some internally collected SFT data.
### Prompt Format
#### Single Turn
```text
<extra_id_0>System
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
<extra_id_1>User
{prompt 1}
<extra_id_1>Assistant
<extra_id_2>quality:4,understanding:4,correctness:4,coherence:4,complexity:4,verbosity:4,toxicity:0,humor:0,creativity:0,violence:0,helpfulness:4,not_appropriate:0,hate_speech:0,sexual_content:0,fails_task:0,political_content:0,moral_judgement:0,lang:en
```
#### Multi-Turn or Few-shot
```text
<extra_id_0>System
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
<extra_id_1>User
{prompt 1}
<extra_id_1>Assistant
<extra_id_2>quality:4,understanding:4,correctness:4,coherence:4,complexity:4,verbosity:4,toxicity:0,humor:0,creativity:0,violence:0,helpfulness:4,not_appropriate:0,hate_speech:0,sexual_content:0,fails_task:0,political_content:0,moral_judgement:0,lang:en
{response 1}
<extra_id_1>User
{prompt 2}
<extra_id_1>Assistant
<extra_id_2>quality:4,understanding:4,correctness:4,coherence:4,complexity:4,verbosity:4,toxicity:0,humor:0,creativity:0,violence:0,helpfulness:4,not_appropriate:0,hate_speech:0,sexual_content:0,fails_task:0,political_content:0,moral_judgement:0,lang:en
```
#### Example prompt formation code
```python
PROMPT_TEMPLATE = """<extra_id_0>System
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
<extra_id_1>User
{prompt}
<extra_id_1>Assistant
<extra_id_2>quality:4,understanding:4,correctness:4,coherence:4,complexity:4,verbosity:4,toxicity:0,humor:0,creativity:0,violence:0,helpfulness:4,not_appropriate:0,hate_speech:0,sexual_content:0,fails_task:0,political_content:0,moral_judgement:0,lang:en"""
question = "Write a poem on NVIDIA in the style of Shakespeare"
prompt = PROMPT_TEMPLATE.format(prompt=question)
print(prompt)
```
Each of the properties (e.g. humor, toxicity…) can receive integer values in the range `[0,4]`.
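To steer generations, you adjust the attribute values in the `<extra_id_2>` header. The helper below is an illustrative sketch: the attribute names and ordering are exactly those shown in the templates above, while the chosen override values are only an example.

```python
# Build the <extra_id_2> attribute header from a dict so that individual
# attributes (e.g. verbosity or creativity) can be steered per request.
DEFAULT_ATTRIBUTES = {
    "quality": 4, "understanding": 4, "correctness": 4, "coherence": 4,
    "complexity": 4, "verbosity": 4, "toxicity": 0, "humor": 0,
    "creativity": 0, "violence": 0, "helpfulness": 4, "not_appropriate": 0,
    "hate_speech": 0, "sexual_content": 0, "fails_task": 0,
    "political_content": 0, "moral_judgement": 0,
}

def attribute_header(lang="en", **overrides):
    attrs = {**DEFAULT_ATTRIBUTES, **overrides}
    fields = [f"{name}:{value}" for name, value in attrs.items()]
    return "<extra_id_2>" + ",".join(fields + [f"lang:{lang}"])

# Example: request a shorter, more creative answer.
print(attribute_header(verbosity=2, creativity=4))
```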
### Software Integration
**Runtime Engine(s):**
NVIDIA AI Enterprise
**Toolkit:**
NeMo Framework
To get access to NeMo Framework, please sign up at [this link](https://developer.nvidia.com/nemo-framework/join). See [NeMo inference container](https://registry.ngc.nvidia.com/orgs/ea-bignlp/teams/ga-participants/containers/nemofw-inference) documentation for details on how to set up and deploy an inference server with NeMo.
**Sample Inference Code:**
```python
from nemo.deploy import NemoQuery
# In this case, we run inference on the same machine
nq = NemoQuery(url="localhost:8000", model_name="Nemotron-3-8B-Chat-4K-SteerLM")
# See above for prompt format
output = nq.query_llm(prompts=[prompt], max_output_token=200, top_k=1, top_p=0.0, temperature=0.1)
# NOTE: Chat models require post-processing the output since the `NemoQuery` API
# does not support stopping generation on the special <extra_id_1> token.
output = [[s.split("<extra_id_1>", 1)[0].strip() for s in out] for out in output]
print(output)
```
**Supported Hardware:**
- H100
- A100 80GB, A100 40GB
### Model Version(s)
`Nemotron-3-8B-chat-4k-steerlm-BF16-1`
## Dataset
NVIDIA models are trained on a diverse set of public and proprietary datasets. NVIDIA is committed to the responsible development of large language models and conducts reviews of all datasets included in training.
## Evaluation
MT Bench Score
| **Category** | **Score** |
|---------------------|------------------|
| Total | 5.6 |
| Writing | 6.35 |
| Roleplay | 6.9 |
| Extraction | 5.25 |
| Stem | 7.5 |
| Humanities | 9.02 |
| Reasoning | 4.9 |
| Math | 2.0 |
| Coding | 2.9 |
## Intended use
The 8B-Chat-SteerLM model is for users who want to customize a model's response during inference.
### Ethical use
Technology can have a profound impact on people and the world, and NVIDIA is committed to enabling trust and transparency in AI development. NVIDIA encourages users to adopt principles of AI ethics and trustworthiness to guide their business decisions by following the guidelines in the [NVIDIA AI Foundation Models Community License Agreement](https://developer.nvidia.com/downloads/nv-ai-foundation-models-license).
## Limitations
- The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
- The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable outputs even if the prompt itself does not include anything explicitly offensive.
|
jzli/Aniverse | jzli | 2023-11-23T09:02:41Z | 415 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-23T08:37:29Z | Entry not found |
TheBloke/openchat-3.5-1210-starling-slerp-GGUF | TheBloke | 2023-12-26T12:09:24Z | 415 | 4 | transformers | [
"transformers",
"gguf",
"mistral",
"merge",
"en",
"base_model:SanjiWatsuki/openchat-3.5-1210-starling-slerp",
"license:cc-by-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-26T11:15:43Z | ---
base_model: SanjiWatsuki/openchat-3.5-1210-starling-slerp
inference: false
language:
- en
license: cc-by-4.0
model_creator: Sanji Watsuki
model_name: OpenChat 3.5 1210 Starling SLERP
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenChat 3.5 1210 Starling SLERP - GGUF
- Model creator: [Sanji Watsuki](https://huggingface.co/SanjiWatsuki)
- Original model: [OpenChat 3.5 1210 Starling SLERP](https://huggingface.co/SanjiWatsuki/openchat-3.5-1210-starling-slerp)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sanji Watsuki's OpenChat 3.5 1210 Starling SLERP](https://huggingface.co/SanjiWatsuki/openchat-3.5-1210-starling-slerp).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF)
* [Sanji Watsuki's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SanjiWatsuki/openchat-3.5-1210-starling-slerp)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openchat-3.5-1210-starling-slerp.Q2_K.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [openchat-3.5-1210-starling-slerp.Q3_K_S.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [openchat-3.5-1210-starling-slerp.Q3_K_M.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [openchat-3.5-1210-starling-slerp.Q3_K_L.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [openchat-3.5-1210-starling-slerp.Q4_0.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openchat-3.5-1210-starling-slerp.Q4_K_S.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [openchat-3.5-1210-starling-slerp.Q4_K_M.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [openchat-3.5-1210-starling-slerp.Q5_0.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openchat-3.5-1210-starling-slerp.Q5_K_S.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [openchat-3.5-1210-starling-slerp.Q5_K_M.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [openchat-3.5-1210-starling-slerp.Q6_K.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [openchat-3.5-1210-starling-slerp.Q8_0.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-starling-slerp-GGUF/blob/main/openchat-3.5-1210-starling-slerp.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openchat-3.5-1210-starling-slerp-GGUF and below it, a specific filename to download, such as: openchat-3.5-1210-starling-slerp.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openchat-3.5-1210-starling-slerp-GGUF openchat-3.5-1210-starling-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openchat-3.5-1210-starling-slerp-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openchat-3.5-1210-starling-slerp-GGUF openchat-3.5-1210-starling-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m openchat-3.5-1210-starling-slerp.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./openchat-3.5-1210-starling-slerp.Q4_K_M.gguf", # Download the model file first
n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./openchat-3.5-1210-starling-slerp.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sanji Watsuki's OpenChat 3.5 1210 Starling SLERP
# Model Description
This model uses the `Slerp` merge method from 2 models:
1. [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
2. [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
- base model: [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
I SLERPed these two together because they're both OpenChat-ish models. Fundamentally, OpenChat-3.5-1210 appears to be trained similarly to OpenChat-3.5 but now with [Feedback-Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)
and [a de-contaminated Capybara](https://huggingface.co/datasets/LDJnr/Capybara). Starling is OpenChat-3.5 but trained with a novel training method on the Nectar set.
My hope is that a SLERP between the two retains the benefits of both.
The yaml config file for this model is here:
```yaml
slices:
- sources:
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 32]
merge_method: slerp
base_model: openchat/openchat-3.5-1210
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
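For context, configs of this shape are typically consumed by a merge tool. A hedged sketch of how such a file might be run with the mergekit CLI (not confirmed as the exact command the author used; the file and output names are illustrative):

```shell
# Assuming the YAML above is saved as slerp-config.yaml and mergekit is installed:
pip install mergekit
mergekit-yaml slerp-config.yaml ./openchat-3.5-1210-starling-slerp --cuda
```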
<!-- original-model-card end -->
|
treytinnell/trained_english_to_syslog | treytinnell | 2024-02-21T21:38:47Z | 415 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-19T22:39:03Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: treytinnell/english_to_syslog
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# treytinnell/english_to_syslog
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
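For reference, these settings correspond roughly to the following `TrainingArguments` sketch; the argument names follow the standard Transformers API, and anything not listed above (such as `output_dir`) is an illustrative placeholder left at or near its default.

```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above; Adam betas and
# epsilon match the Transformers defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="english_to_syslog",      # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=20,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```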
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5002 | 1.0 | 64 | 0.3738 |
| 0.3482 | 2.0 | 128 | 0.3437 |
| 0.3241 | 3.0 | 192 | 0.3215 |
| 0.2909 | 4.0 | 256 | 0.3315 |
| 0.2081 | 5.0 | 320 | 0.3572 |
| 0.1918 | 6.0 | 384 | 0.3725 |
| 0.1797 | 7.0 | 448 | 0.3701 |
| 0.2006 | 8.0 | 512 | 0.4088 |
| 0.1447 | 9.0 | 576 | 0.4181 |
| 0.1014 | 10.0 | 640 | 0.4244 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ven1228/5DfRPKVqdUoV8HmruBRQM7gk9tmSKscBymGhzteqd4KmMART_vgg | ven1228 | 2024-03-11T12:46:09Z | 415 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-05T05:42:21Z | Entry not found |
mradermacher/Borealis-10.7B-GGUF | mradermacher | 2024-05-06T06:17:59Z | 415 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Undi95/Borealis-10.7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-09T13:57:56Z | ---
base_model: Undi95/Borealis-10.7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
static quants of https://huggingface.co/Undi95/Borealis-10.7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Borealis-10.7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
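As a minimal sketch of basic usage (the file name is taken from the table below; the llama.cpp flags are illustrative):

```shell
# Download a single quant and run it with llama.cpp (example flags).
pip3 install huggingface-hub
huggingface-cli download mradermacher/Borealis-10.7B-GGUF Borealis-10.7B.Q4_K_M.gguf --local-dir .
./main -m Borealis-10.7B.Q4_K_M.gguf -c 4096 -ngl 32 -p "Your prompt here"
```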
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.IQ4_XS.gguf) | IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q6_K.gguf) | Q6_K | 9.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Borealis-10.7B-GGUF/resolve/main/Borealis-10.7B.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Weni/ZeroShot-3.3.34-Mistral-7b-Multilanguage-3.3.0-merged-v2 | Weni | 2024-03-15T19:58:07Z | 415 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-03-15T19:25:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
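Since this section is still empty, here is a hedged, minimal sketch inferred only from the repository tags (`mistral`, `text-generation`, `4-bit`, `bitsandbytes`); the quantization and generation settings are assumptions, not documented values.

```python
# Hedged sketch based on the repo tags only; all settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Weni/ZeroShot-3.3.34-Mistral-7b-Multilanguage-3.3.0-merged-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # repo is tagged 4-bit / bitsandbytes
    device_map="auto",
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```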
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF | mradermacher | 2024-05-06T05:07:30Z | 415 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Aeala/GPT4-x-AlpacaDente-30b",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-08T18:50:14Z | ---
base_model: Aeala/GPT4-x-AlpacaDente-30b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Aeala/GPT4-x-AlpacaDente-30b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente-30b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adowu/astral-256k-7b-v2 | adowu | 2024-04-10T04:59:02Z | 415 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"256k",
"long",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-10T04:16:50Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- astral
- 256k
- long
- mistral
---
### ASTRAL-256k-7b-v2
The adowu/astral-256k-7b-v2 model is built on the MistralForCausalLM architecture and is designed for causal language modeling over long contexts (256k, per the model name). It is intended to understand and generate text with strong context awareness, making it applicable to a wide range of natural language processing (NLP) tasks.
## Key Features
- Advanced Architecture: Utilizes the MistralForCausalLM framework, enabling efficient and effective text processing and generation.
- Large Model Scale: Equipped with a substantial model size, it captures and processes a vast amount of information, enhancing its understanding and generation capabilities.
- Extended Sequence Handling: Capable of managing exceptionally long sequences, this model excels in tasks requiring extensive contextual information.
## Performance and Efficiency
Optimized for high performance, the model employs techniques to balance computational efficiency with output precision. This optimization ensures it can be deployed effectively across various platforms, including those supporting bfloat16 computations, without significant loss in the quality of generated text.
## Application Potential
The model's sophisticated understanding and text generation capabilities make it ideal for several advanced applications:
- Content Generation: From articles and reports to creative writing, it can produce coherent and contextually rich content.
- Conversational Systems: Powers chatbots and virtual assistants, facilitating deep and meaningful interactions over extended conversations.
- Complex Language Understanding Tasks: Excellently performs in summarization, translation, and other tasks over large documents, showcasing its ability to handle detailed and nuanced language understanding.
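The card does not include a usage snippet; below is a hedged sketch using the standard transformers text-generation API. The model id comes from this card, `bfloat16` is chosen because the card mentions bfloat16-friendly deployment, and everything else (prompt, generation length) is an assumption.

```python
# Hedged sketch; only the model id is taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adowu/astral-256k-7b-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card notes support for bfloat16 computation
    device_map="auto",
)

prompt = "Summarize the key points of the following report:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```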
- **Developed by:** aww
- **Model type:** Mistral |
LiteLLMs/Ko-Llama3-Luxia-8B-GGUF | LiteLLMs | 2024-05-28T08:33:12Z | 415 | 0 | null | [
"gguf",
"saltlux",
"luxia",
"meta",
"llama-3",
"pytorch",
"GGUF",
"text-generation",
"en",
"ko",
"license:llama3",
"region:us"
]
| text-generation | 2024-05-07T23:56:49Z |
---
language:
- en
- ko
license: llama3
tags:
- saltlux
- luxia
- meta
- llama-3
- pytorch
- GGUF
pipeline_tag: text-generation
quantized_by: andrijdavid
---
# Ko-Llama3-Luxia-8B-GGUF
- Original model: [Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
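As a back-of-the-envelope check of the bits-per-weight figures above, the GGML_TYPE_Q4_K arithmetic can be reproduced directly; note that the 16-bit super-block scale and min used below are an assumption based on the usual llama.cpp k-quant layout, not something stated in this card.

```python
# Rough bpw check for GGML_TYPE_Q4_K as described above:
# super-block = 8 blocks x 32 weights, 4-bit weights, 6-bit block scales and mins.
weights_bits = 8 * 32 * 4          # 1024 bits of quantized weights
block_meta_bits = 8 * (6 + 6)      # 96 bits of 6-bit scales and mins
superblock_meta_bits = 2 * 16      # 32 bits (assumption: fp16 super-block scale + min)

bpw = (weights_bits + block_meta_bits + superblock_meta_bits) / (8 * 32)
print(bpw)  # 4.5, matching the figure quoted above for Q4_K
```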
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Ko-Llama3-Luxia-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Ko-Llama3-Luxia-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Ko-Llama3-Luxia-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Ko-Llama3-Luxia-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Ko-Llama3-Luxia-8B
# Model Details
<b>Ko-Llama3-Luxia-8B</b>, trained and released by the Saltlux AI Labs language model team, is a version of Meta's Llama-3-8B model <b>specialized for Korean</b>.<br><br>
Out of more than 1TB of Korean training data held in-house, roughly 100GB was carefully selected and used for pretraining.<br><br>
In addition, the publicly released Llama-3 tokenizer was extended with Korean and used for pretraining.
- **Meta Llama-3:** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
- **License:** Llama3 License [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
### Intended Use
Ko-Llama3-Luxia-8B is intended for research and can be freely trained on and used for a variety of natural language generation tasks.
### How to Use
This model card provides example code for using the `Ko-Llama3-Luxia-8B` model with the transformers library.
```
import transformers
import torch
model_id = "saltlux/Ko-Llama3-Luxia-8B"
pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("<|begin_of_text|>์๋
ํ์ธ์. ์ํธ๋ฃฉ์ค AI Labs ์
๋๋ค.")
```
# Training Details
The pretraining data used for Korean specialization is a corpus of about 100GB (up to 2023) held by Saltlux, covering domains such as news, law, patents, medicine, history, society, culture, and dialogue (written/spoken).<br>
- The currently released model has been trained for 1 epoch.<br>
### Use Device
Pretraining was carried out on 8x NVIDIA H100 80GB GPUs.
#### Training Hyperparameters
<table>
<tr>
<td><strong>Model</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Learning rate</strong>
</td>
<td><strong>Batch</strong>
</td>
<td><strong>Precision</strong>
</td>
</tr>
<tr>
<td>Ko-Llama3-Luxia-8B
</td>
<td>8B
</td>
<td>8k
</td>
<td>yes
</td>
<td>1e-5
</td>
<td>128
</td>
<td>bf16
</td>
</tr>
</table>
### Tokenizer
To specialize the Llama-3 tokenizer for Korean, 17,536 Korean tokens were added and used (128,256 + 17,536 = 145,792, matching the vocabulary sizes in the table below).
<table>
<tr>
<td><strong>Model</strong>
</td>
<td><strong>Vocab Size</strong>
</td>
</tr>
<tr>
<td>Llama-3
</td>
<td>128,256
</td>
</tr>
<tr>
<td>Ko-Llama3-Luxia-8B
</td>
<td>145,792
</td>
</tr>
</table>
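A hedged sketch (not from the original card) for verifying the vocabulary sizes in the table above by loading both tokenizers; it assumes you have access to the gated `meta-llama/Meta-Llama-3-8B` repository.

```python
# Compare vocabulary sizes of the base and Korean-extended tokenizers.
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # gated repo; access is assumed
ko = AutoTokenizer.from_pretrained("saltlux/Ko-Llama3-Luxia-8B")

print(len(base), len(ko), len(ko) - len(base))  # expected: 128256 145792 17536
```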
### Tokenizer Result
+ Ko

[Table: five Korean example sentences tokenized by Llama-3 vs. Ko-Llama3-Luxia-8B; with the Korean-extended vocabulary the sentences are split into fewer, word-aligned tokens and without the broken (�) byte-fallback pieces produced by the base Llama-3 tokenizer.]
+ En
<table>
<tr>
<td><strong>Input</strong>
</td>
<td><strong>Llama-3</strong>
</td>
<td><strong>Ko-Llama3-Luxia-8B</strong>
</td>
</tr>
<tr>
<td>Korean cuisine, hanguk yori, or hansik, has evolved through centuries of social and political change.
</td>
<td>['K', 'orean', ' cuisine', ',', ' h', 'angu', 'k', ' y', 'ori', ',', ' or', ' hans', 'ik', ',', ' has', ' evolved', ' through', ' centuries', ' of', ' social', ' and', ' political', ' change', '.']
</td>
<td>['K', 'orean', ' cuisine', ',', ' h', 'angu', 'k', ' y', 'ori', ',', ' or', ' hans', 'ik', ',', ' has', ' evolved', ' through', ' centuries', ' of', ' social', ' and', ' political', ' change', '.']
</td>
</tr>
<tr>
<td>Son Heung-min is a South Korean professional footballer who plays as a forward for and captains both Premier League club Tottenham Hotspur and the South Korea national team.
</td>
<td>['Son', ' He', 'ung', '-min', ' is', ' a', ' South', ' Korean', ' professional', ' football', 'er', ' who', ' plays', ' as', ' a', ' forward', ' for', ' and', ' captains', ' both', ' Premier', ' League', ' club', ' Tottenham', ' Hot', 'sp', 'ur', ' and', ' the', ' South', ' Korea', ' national', ' team', '.']
</td>
<td>['Son', ' He', 'ung', '-min', ' is', ' a', ' South', ' Korean', ' professional', ' football', 'er', ' who', ' plays', ' as', ' a', ' forward', ' for', ' and', ' captains', ' both', ' Premier', ' League', ' club', ' Tottenham', ' Hot', 'sp', 'ur', ' and', ' the', ' South', ' Korea', ' national', ' team', '.']
</td>
</tr>
</table>
### Inference Result
[Table: sample continuations from Llama-3 and Ko-Llama3-Luxia-8B for three Korean prompts (Caribbean Bay in Yongin, Vietnamese rice noodle soup, and hanbok, Korea's traditional costume); the base Llama-3 responses also append English translations, while Ko-Llama3-Luxia-8B answers entirely in Korean.]
### Citation instructions
**Ko-Llama3-Luxia-8B**
```
@article{kollama3luxiamodelcard,
title={Ko Llama 3 Luxia Model Card},
  author={AILabs@Saltlux},
year={2024},
url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
<!-- original-model-card end -->
|
second-state/Yi-1.5-34B-Chat-16K-GGUF | second-state | 2024-06-13T12:22:30Z | 415 | 3 | null | [
"gguf",
"text-generation",
"base_model:01-ai/Yi-1.5-34B-Chat-16K",
"license:other",
"region:us"
]
| text-generation | 2024-05-17T15:33:25Z | ---
base_model: 01-ai/Yi-1.5-34B-Chat-16K
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi-1.5-34B-Chat-16K
model_type: yi
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi-1.5-34B-Chat-16K-GGUF
## Original Model
[01-ai/Yi-1.5-34B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K)
## Run with LlamaEdge
- LlamaEdge version: [v0.10.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.10.0) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Reverse prompt: `<|im_end|>`
- Context size: `16384`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Yi-1.5-34B-Chat-16K-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template chatml \
--reverse-prompt "<|im_end|>" \
--ctx-size 16384 \
--model-name Yi-1.5-34B-Chat-16K
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Yi-1.5-34B-Chat-16K-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--reverse-prompt "<|im_end|>" \
--ctx-size 16384
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Yi-1.5-34B-Chat-16K-Q2_K.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q2_K.gguf) | Q2_K | 2 |12.8 GB| smallest, significant quality loss - not recommended for most purposes |
| [Yi-1.5-34B-Chat-16K-Q3_K_L.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q3_K_L.gguf) | Q3_K_L | 3 | 18.1 GB| small, substantial quality loss |
| [Yi-1.5-34B-Chat-16K-Q3_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q3_K_M.gguf) | Q3_K_M | 3 | 16.7 GB| very small, high quality loss |
| [Yi-1.5-34B-Chat-16K-Q3_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q3_K_S.gguf) | Q3_K_S | 3 | 15 GB| very small, high quality loss |
| [Yi-1.5-34B-Chat-16K-Q4_0.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q4_0.gguf) | Q4_0 | 4 | 19.5 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-1.5-34B-Chat-16K-Q4_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q4_K_M.gguf) | Q4_K_M | 4 | 20.7 GB| medium, balanced quality - recommended |
| [Yi-1.5-34B-Chat-16K-Q4_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q4_K_S.gguf) | Q4_K_S | 4 | 19.6 GB| small, greater quality loss |
| [Yi-1.5-34B-Chat-16K-Q5_0.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q5_0.gguf) | Q5_0 | 5 | 23.7 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-1.5-34B-Chat-16K-Q5_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q5_K_M.gguf) | Q5_K_M | 5 | 24.3 GB| large, very low quality loss - recommended |
| [Yi-1.5-34B-Chat-16K-Q5_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q5_K_S.gguf) | Q5_K_S | 5 | 23.7 GB| large, low quality loss - recommended |
| [Yi-1.5-34B-Chat-16K-Q6_K.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q6_K.gguf) | Q6_K | 6 | 28.2 GB| very large, extremely low quality loss |
| [Yi-1.5-34B-Chat-16K-Q8_0.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q8_0.gguf) | Q8_0 | 8 | 36.5 GB| very large, extremely low quality loss - not recommended |
| [Yi-1.5-34B-Chat-16K-f16-00001-of-00003.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-f16-00001-of-00003.gguf) | f16 | 16 | 32.2 GB| |
| [Yi-1.5-34B-Chat-16K-f16-00002-of-00003.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-f16-00002-of-00003.gguf) | f16 | 16 | 32.1 GB| |
| [Yi-1.5-34B-Chat-16K-f16-00003-of-00003.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-f16-00003-of-00003.gguf) | f16 | 16 | 4.48 GB| |
*Quantized with llama.cpp b3135*
|
QuantFactory/Qwen2-0.5B-Instruct-GGUF | QuantFactory | 2024-06-18T06:33:07Z | 415 | 0 | null | [
"gguf",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-12T05:18:08Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
base_model: Qwen/Qwen2-0.5B-Instruct
---
# Qwen2-0.5B-Instruct-GGUF
This is a quantized version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) created using llama.cpp.
## Model Description
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 0.5B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
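A quick, hedged way to confirm the installed version before loading the model:

```python
# Check that transformers is new enough to know the "qwen2" architecture.
import transformers
print(transformers.__version__)  # should be >= 4.37.0
```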
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-0.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation
We briefly compare Qwen2-0.5B-Instruct with Qwen1.5-0.5B-Chat (and, for reference, Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat). The results are as follows:
| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
## Original Model Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF | CHE-72 | 2024-06-21T18:48:12Z | 415 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-21T18:47:52Z | ---
base_model: Qwen/Qwen2-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -c 2048
```
|
sentence-transformers/distilbert-base-nli-stsb-quora-ranking | sentence-transformers | 2024-03-27T10:19:13Z | 414 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"safetensors",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# sentence-transformers/distilbert-base-nli-stsb-quora-ranking
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')
embeddings = model.encode(sentences)
print(embeddings)
```
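Because this model was tuned for Quora duplicate-question ranking, a typical next step is to score sentence pairs with cosine similarity. The snippet below is a small illustrative extension of the example above; the example questions are invented.

```python
# Rank candidate questions against a query by cosine similarity (illustrative only).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')

query = "How do I learn Python quickly?"
candidates = [
    "What is the fastest way to learn Python?",
    "How do I cook pasta?",
]

query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(query_emb, cand_emb)[0]
for cand, score in zip(candidates, scores):
    print(f"{float(score):.3f}  {cand}")
```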
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')
model = AutoModel.from_pretrained('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-base-nli-stsb-quora-ranking)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
jy46604790/Fake-News-Bert-Detect | jy46604790 | 2022-04-26T04:36:13Z | 414 | 12 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-04-24T11:25:53Z | ---
license: apache-2.0
---
# Fake News Recognition
## Overview
This model is trained on over 40,000 news articles from different media outlets, based on 'roberta-base'. It returns a result when you enter the text of a news article of up to 500 words (the excess will be truncated automatically).
LABEL_0: Fake news
LABEL_1: Real news
## Quick Tutorial
### Download The Model
```python
from transformers import pipeline
MODEL = "jy46604790/Fake-News-Bert-Detect"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```python
text = "Indonesian police have recaptured a U.S. citizen who escaped a week ago from an overcrowded prison on the holiday island of Bali, the jail s second breakout of foreign inmates this year. Cristian Beasley from California was rearrested on Sunday, Badung Police chief Yudith Satria Hananta said, without providing further details. Beasley was a suspect in crimes related to narcotics but had not been sentenced when he escaped from Kerobokan prison in Bali last week. The 32-year-old is believed to have cut through bars in the ceiling of his cell before scaling a perimeter wall of the prison in an area being refurbished. The Kerobokan prison, about 10 km (six miles) from the main tourist beaches in the Kuta area, often holds foreigners facing drug-related charges. Representatives of Beasley could not immediately be reached for comment. In June, an Australian, a Bulgarian, an Indian and a Malaysian tunneled to freedom about 12 meters (13 yards) under Kerobokan prison s walls. The Indian and the Bulgarian were caught soon after in neighboring East Timor, but Australian Shaun Edward Davidson and Malaysian Tee Kok King remain at large. Davidson has taunted authorities by saying he was enjoying life in various parts of the world, in purported posts on Facebook. Kerobokan has housed a number of well-known foreign drug convicts, including Australian Schappelle Corby, whose 12-1/2-year sentence for marijuana smuggling got huge media attention."
```
### Result
```python
result = clf(text)
result
```
Output: `[{'label': 'LABEL_1', 'score': 0.9994995594024658}]` |
ufal/eleczech-lc-small | ufal | 2023-01-12T16:01:09Z | 414 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"Czech",
"Electra",
"รFAL",
"cs",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-04-24T11:32:43Z | ---
language: "cs"
tags:
- Czech
- Electra
- รFAL
license: "cc-by-nc-sa-4.0"
---
# EleCzech-LC model
The `eleczech-lc-small` is a monolingual small Electra language representation
model trained on lowercased Czech data (but with diacritics kept in place).
It is trained on the same data as the
[RobeCzech model](https://huggingface.co/ufal/robeczech-base).
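The card does not include a usage snippet; a minimal hedged sketch for extracting contextual embeddings with transformers might look as follows. Because the model was trained on lowercased text, the input is lowercased first; everything beyond the model id is an assumption.

```python
# Hedged sketch: feature extraction with the lowercase Czech Electra model.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ufal/eleczech-lc-small")
model = AutoModel.from_pretrained("ufal/eleczech-lc-small")

text = "Praha je hlavní město České republiky.".lower()  # "Prague is the capital of the Czech Republic."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```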
|
vinai/bartpho-syllable-base | vinai | 2022-10-22T09:00:27Z | 414 | 1 | transformers | [
"transformers",
"pytorch",
"mbart",
"feature-extraction",
"arxiv:2109.09701",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-08-19T14:21:32Z | # <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese
The pre-trained model `vinai/bartpho-syllable-base` is the "base" variant of `BARTpho-syllable`, which uses the "base" architecture and pre-training scheme of the sequence-to-sequence denoising model [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart). The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701):
@article{bartpho,
title = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}},
author = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen},
journal = {arXiv preprint},
volume = {arXiv:2109.09701},
year = {2021}
}
**Please CITE** our paper when BARTpho is used to help produce published results or incorporated into other software.
For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!
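A minimal hedged sketch of loading this checkpoint as a feature extractor with transformers; the Vietnamese example sentence ("We are researchers.") and the feature-extraction framing are assumptions rather than instructions from the BARTpho authors.

```python
# Hedged sketch: encode a Vietnamese sentence with the base BARTpho-syllable model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable-base")
model = AutoModel.from_pretrained("vinai/bartpho-syllable-base")

sentence = "Chúng tôi là những nghiên cứu viên."  # "We are researchers."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    features = model(**inputs)

print(features.last_hidden_state.shape)
```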
|
l3cube-pune/marathi-sentence-similarity-sbert | l3cube-pune | 2023-06-11T14:59:15Z | 414 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mr",
"arxiv:2211.11187",
"arxiv:2304.11434",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2022-11-05T18:26:08Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: mr
widget:
- source_sentence: "เคถเฅเคคเคเคฑเฅเคฏเคพเคเคเฅ เคกเฅเคณเฅ เคเคเคพเคถเคพเคเคกเฅ เคฒเคพเคเคฒเฅ เคเคนเฅเคค"
sentences:
- "เคเคคเคพ เคถเฅเคคเคเคฑเฅเคฏเคพเคเคเฅ เคกเฅเคณเฅ เคเคญเคพเคณเคพเคเคกเฅ เคฒเคพเคเคฒเฅ เคเคนเฅเคค"
- "เค
เคจเฅเคจเคงเคพเคจเฅเคฏ เคเคคเฅเคชเคพเคฆเคจเคพเคธเคพเค เฅ เคถเฅเคคเคเคฐเฅ เคเคทเฅเค เคเคฐเคคเคพเคค"
- "เคถเคนเคฐเคพเคค เคเคเคฑเฅเคฏเคพเคเฅ เคขเฅเค เคฆเคฟเคธเคคเคพเคค"
example_title: "Example 1"
- source_sentence: "เคเคเคจเฅเคเฅ เคฎเคพเคนเคฟเคคเฅ เคฎเคฟเคณเคคเคพเค เคชเฅเคฒเคฟเคธเคพเคเคเคพ เคคเคพเคซเคพ เคคเฅเคฅเฅ เคชเฅเคนเฅเคเคฒเคพ"
sentences:
- "เคชเฅเคฒเคฟเคธเคพเคเคจเคพ เคเคเคจเฅเคเฅ เคฎเคพเคนเคฟเคคเฅ เคฎเคฟเคณเคคเคพเค เคคเฅเคฏเคพเคเคเฅ เคชเคฅเค เคเคเคจเคพเคธเฅเคฅเคณเฅ เคชเฅเคนเฅเคเคฒเฅ"
- "เคคเฅเคตเฅเคนเคพ เคชเฅเคฒเคฟเคธเคพเคเคจเฅ เคคเฅเคฏเคพเคเคเฅเคฏเคพ เคคเคเฅเคฐเคพเคฐเฅเคเฅ เคฆเคเคฒ เคเฅเคคเคฒเฅ เคจเคพเคนเฅ"
- "เคฆเคฟเคตเคธเคพเคเคพ เคเคคเฅเคคเคฐเคพเคฐเฅเคง เคเฅเคเฅเคเคฌเคพเคธเฅเคฌเคค เคฎเฅเคเคฎเคเฅเคค เคเคพเคฒเคตเคพเคฒ"
example_title: "Example 2"
- source_sentence: "เคชเคนเคฟเคฒเฅเคฏเคพ เคชเคพเค เคเคฟเคฒเฅเคฎเฅเคเคฐ เค
เคเคคเคฐเคพเคธเคพเค เฅ เคชเคพเค เคฐเฅเคชเคฏเฅ เคฆเคฐ เคเคเคพเคฐเคฃเฅเคฏเคพเคค เคฏเฅเคค เคเคนเฅ"
sentences:
- "เคชเคพเค เคฐเฅเคชเคฏเคพเคเคค เคชเคพเค เคเคฟเคฎเฅ เคชเฅเคฐเคตเคพเคธ เคเคฐเคพ"
- "เคฆเฅเคจ เค เคฟเคเคพเคฃเคพเคเคฎเคงเคฒเฅ เคฎเฅเค เฅ เค
เคเคคเคฐ เคชเฅเคฐเคตเคพเคธ เคเคฐเคฃเฅ เคเคเคเคพเคณเคตเคพเคฃเฅ เคเคนเฅ"
- "เคจเฅเคเคคเฅเคฏเคพเค เคเคพเคฒเฅเคฒเฅเคฏเคพ เคชเคพเคตเคธเคพเคฎเฅเคณเฅ เคนเคฟเคฐเคตเคณ เคฆเคฟเคธเคค เคเคนเฅ"
example_title: "Example 3"
---
# MahaSBERT-STS
A MahaSBERT model (l3cube-pune/marathi-sentence-bert-nli) fine-tuned on the STS dataset. <br>
This is released as a part of project MahaNLP : https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual sentence similarity is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> indic-sentence-similarity-sbert </a> <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187).
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/marathi-sentence-similarity-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
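For the intended sentence-similarity use case, embeddings are typically compared with cosine similarity. A minimal sketch using `sentence_transformers.util` (the English sentences are placeholders; in practice you would pass Marathi text):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('l3cube-pune/marathi-sentence-similarity-sbert')

# Encode a query sentence and two candidate sentences
query = model.encode("This is an example sentence", convert_to_tensor=True)
candidates = model.encode(
    ["Each sentence is converted", "A completely unrelated sentence"],
    convert_to_tensor=True,
)

# Cosine similarity scores; higher means more similar
print(util.cos_sim(query, candidates))
```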
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/marathi-sentence-similarity-sbert')
model = AutoModel.from_pretrained('l3cube-pune/marathi-sentence-similarity-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
timm/dla60x_c.in1k | timm | 2023-04-24T21:14:06Z | 414 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1707.06484",
"license:bsd-3-clause",
"region:us"
]
| image-classification | 2023-04-24T19:35:46Z | ---
tags:
- image-classification
- timm
library_name: timm
license: bsd-3-clause
datasets:
- imagenet-1k
---
# Model card for dla60x_c.in1k
A DLA (Deep Layer Aggregation) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 1.3
- GMACs: 0.6
- Activations (M): 6.0
- Image size: 224 x 224
- **Papers:**
- Deep Layer Aggregation: https://arxiv.org/abs/1707.06484
- **Original:** https://github.com/ucbdrive/dla
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dla60x_c.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla60x_c.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 64, 28, 28])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 256, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla60x_c.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
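As a small illustration of how these embeddings can be used (e.g. for image retrieval), the sketch below compares two embeddings with cosine similarity. It reuses the same example image twice purely to stay self-contained; in practice you would embed two different images:
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch.nn.functional as F

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('dla60x_c.in1k', pretrained=True, num_classes=0)
model = model.eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

emb_a = model(transforms(img).unsqueeze(0))  # (1, num_features)
emb_b = model(transforms(img).unsqueeze(0))  # (1, num_features)

# Cosine similarity between the two embeddings (1.0 here, since the inputs are identical)
print(F.cosine_similarity(emb_a, emb_b).item())
```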
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{yu2018deep,
title={Deep layer aggregation},
author={Yu, Fisher and Wang, Dequan and Shelhamer, Evan and Darrell, Trevor},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
year={2018}
}
```
|
timm/caformer_s36.sail_in22k | timm | 2023-05-05T05:52:41Z | 414 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-05-05T05:51:39Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for caformer_s36.sail_in22k
A CAFormer (a MetaFormer) image classification model. Trained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 82.0
- GMACs: 8.0
- Activations (M): 37.5
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_s36.sail_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s36.sail_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s36.sail_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
Yntec/SamaritanDoesArt | Yntec | 2023-08-09T10:15:40Z | 414 | 4 | diffusers | [
"diffusers",
"safetensors",
"art",
"anime",
"style",
"3D",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"jinofcoolnes",
"PromptSharingSamaritan",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-07T14:37:21Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- style
- 3D
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- jinofcoolnes
- PromptSharingSamaritan
inference: true
---
# samaritanDoesArt
Samples and prompts:


tiny baby girl. chibi.
A mix of SamDoesArtUltimerge with Samaritan 3D Cartoon v2.
Haha, if you think the only reason I mixed them was so I could name the model like this, you're right! Still, the results speak for themselves.
Original pages:
https://huggingface.co/jinofcoolnes/sammod
https://civitai.com/models/81270?modelVersionId=113299 |
TheBloke/MegaMix-T1-13B-GGUF | TheBloke | 2023-09-30T19:02:40Z | 414 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:gradientputri/MegaMix-T1-13B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-30T18:56:42Z | ---
base_model: gradientputri/MegaMix-T1-13B
inference: false
license: llama2
model_creator: Putri
model_name: Megamix T1 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Megamix T1 13B - GGUF
- Model creator: [Putri](https://huggingface.co/gradientputri)
- Original model: [Megamix T1 13B](https://huggingface.co/gradientputri/MegaMix-T1-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Putri's Megamix T1 13B](https://huggingface.co/gradientputri/MegaMix-T1-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MegaMix-T1-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MegaMix-T1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF)
* [Putri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/gradientputri/MegaMix-T1-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [megamix-t1-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [megamix-t1-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [megamix-t1-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [megamix-t1-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [megamix-t1-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [megamix-t1-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [megamix-t1-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [megamix-t1-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [megamix-t1-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [megamix-t1-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [megamix-t1-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [megamix-t1-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MegaMix-T1-13B-GGUF/blob/main/megamix-t1-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
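The file sizes follow almost directly from the bits-per-weight figures given above: parameters × bpw ÷ 8 bytes. A rough back-of-the-envelope sketch (the 13B parameter count is an assumption, and real GGUF files run slightly larger because some tensors are stored at higher precision):
```python
# Rough GGUF file-size estimate from effective bits per weight (bpw)
params = 13e9   # assumed parameter count for a 13B Llama model
bpw = 4.5       # Q4_K_M, per the quantisation notes above

size_gb = params * bpw / 8 / 1e9
print(f"~{size_gb:.1f} GB")  # ~7.3 GB, in the same ballpark as the 7.87 GB Q4_K_M file
```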
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MegaMix-T1-13B-GGUF and below it, a specific filename to download, such as: megamix-t1-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MegaMix-T1-13B-GGUF megamix-t1-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MegaMix-T1-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MegaMix-T1-13B-GGUF megamix-t1-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m megamix-t1-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MegaMix-T1-13B-GGUF", model_file="megamix-t1-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
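For reference, a minimal LangChain sketch using the llama-cpp-python backend might look like the following; treat it as an assumption-laden example (import paths and parameter names vary across LangChain versions), and defer to the guides above:
```python
from langchain.llms import LlamaCpp

# Path to a GGUF file downloaded from this repo (see the download section above)
llm = LlamaCpp(
    model_path="./megamix-t1-13b.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 for CPU-only inference
    n_ctx=4096,
    temperature=0.7,
)

print(llm("AI is going to"))
```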
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Putri's Megamix T1 13B
No original model card was available.
<!-- original-model-card end -->
|
piazzola/address-detection-model | piazzola | 2023-10-13T18:28:38Z | 414 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-13T17:14:12Z | ---
license: cc-by-nc-2.0
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the [addressWithContext](https://huggingface.co/datasets/piazzola/addressWithContext) dataset.
## Model description
**Make sure to set max_new_tokens = 20; otherwise, the model will generate one token at a time.**
```python
from transformers import pipeline

nlp = pipeline("text-generation",
model="piazzola/tmp_trainer",
max_new_tokens=20)
nlp("I live at 15 Firstfield Road.")
```
**Note that if you would like to try longer sentences using the Hosted Inference API on the right-hand side of this page, you might need to click "Compute" more than once to get the address.**
## Intended uses & limitations
The model is intended to detect addresses that occur in a sentence.
## Training and evaluation data
This model is trained on `piazzola/addressWithContext`.
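A quick way to inspect that dataset with the `datasets` library (assuming the dataset is publicly accessible and the split name is `train`):
```python
from datasets import load_dataset

dataset = load_dataset("piazzola/addressWithContext")
print(dataset)              # available splits and sizes
print(dataset["train"][0])  # one example record (split name is an assumption)
```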
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1 |
TheBloke/ShiningValiant-1.2-GGUF | TheBloke | 2023-10-16T00:44:58Z | 414 | 9 | transformers | [
"transformers",
"gguf",
"llama",
"shining-valiant",
"valiant",
"valiant-labs",
"llama-2",
"llama-2-chat",
"70b",
"text-generation",
"en",
"base_model:ValiantLabs/ShiningValiant",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-15T23:57:31Z | ---
base_model: ValiantLabs/ShiningValiant
inference: false
language:
- en
license: llama2
model_creator: Valiant Labs
model_name: ShiningValiant 1.2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
tags:
- shining-valiant
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 70b
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ShiningValiant 1.2 - GGUF
- Model creator: [Valiant Labs](https://huggingface.co/ValiantLabs)
- Original model: [ShiningValiant 1.2](https://huggingface.co/ValiantLabs/ShiningValiant)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Valiant Labs's ShiningValiant 1.2](https://huggingface.co/ValiantLabs/ShiningValiant).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ShiningValiant-1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ShiningValiant-1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF)
* [Valiant Labs's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ValiantLabs/ShiningValiant)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
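If you assemble the prompt string yourself (for example when calling llama.cpp or llama-cpp-python directly), a small helper such as the sketch below reproduces this template; it is an illustration, not part of the original model card:
```python
def build_llama2_chat_prompt(prompt: str, system_message: str) -> str:
    # Wraps a user message in the Llama-2-Chat format shown above
    return f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt}[/INST]"

print(build_llama2_chat_prompt(
    "Write a haiku about mountains.",
    "You are a helpful, respectful and honest assistant.",
))
```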
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [shiningvaliant-1.2.Q2_K.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [shiningvaliant-1.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [shiningvaliant-1.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [shiningvaliant-1.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [shiningvaliant-1.2.Q4_0.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [shiningvaliant-1.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [shiningvaliant-1.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [shiningvaliant-1.2.Q5_0.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [shiningvaliant-1.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [shiningvaliant-1.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF/blob/main/shiningvaliant-1.2.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| shiningvaliant-1.2.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `shiningvaliant-1.2.Q6_K.gguf-split-a`
* `shiningvaliant-1.2.Q6_K.gguf-split-b`
### q8_0
Please download:
* `shiningvaliant-1.2.Q8_0.gguf-split-a`
* `shiningvaliant-1.2.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat shiningvaliant-1.2.Q6_K.gguf-split-* > shiningvaliant-1.2.Q6_K.gguf && rm shiningvaliant-1.2.Q6_K.gguf-split-*
cat shiningvaliant-1.2.Q8_0.gguf-split-* > shiningvaliant-1.2.Q8_0.gguf && rm shiningvaliant-1.2.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B shiningvaliant-1.2.Q6_K.gguf-split-a + shiningvaliant-1.2.Q6_K.gguf-split-b shiningvaliant-1.2.Q6_K.gguf
del shiningvaliant-1.2.Q6_K.gguf-split-a shiningvaliant-1.2.Q6_K.gguf-split-b
COPY /B shiningvaliant-1.2.Q8_0.gguf-split-a + shiningvaliant-1.2.Q8_0.gguf-split-b shiningvaliant-1.2.Q8_0.gguf
del shiningvaliant-1.2.Q8_0.gguf-split-a shiningvaliant-1.2.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/ShiningValiant-1.2-GGUF and below it, a specific filename to download, such as: shiningvaliant-1.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/ShiningValiant-1.2-GGUF shiningvaliant-1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/ShiningValiant-1.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ShiningValiant-1.2-GGUF shiningvaliant-1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m shiningvaliant-1.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/ShiningValiant-1.2-GGUF", model_file="shiningvaliant-1.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Valiant Labs's ShiningValiant 1.2

Shining Valiant is a chat model built on the Llama 2 architecture, finetuned on our data for insight, creativity, passion, and friendliness.
- Uses the llama-2-70b-chat model, with safetensors
- Finetuned on multiple runs across private and public data
- Data focused on knowledge, enthusiasm, and structured reasoning
## Version
The current version is **1.2**; congrats to our team on the new release!
Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete :)
## Prompting
Shining Valiant uses the same prompt format as Llama 2 Chat - feel free to use your existing prompts and scripts!
A few examples of different formats:
1. [INST] Good morning! Can you let me know how to parse a text file and turn the semicolons into commas? [/INST]
2. [INST] (You are an intelligent, helpful AI assistant.) Hello, can you write me a thank you letter? [/INST]
3. [INST] << SYS >>You are an intelligent, helpful AI assistant.<< /SYS >>Deep dive about a country with interesting history: [/INST]
## The Model
Shining Valiant is built on top of Stellar Bright, which uses Llama 2's 70b parameter architecture and features upgraded general capability. (Stellar Bright uses public open source data only.)
From there, we've created Shining Valiant through multiple finetuning runs on different compositions of our private dataset.
Our private data focuses primarily on applying Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn!
We are actively working on expanding and improving the Shining Valiant dataset for use in future releases of this model and others.

Shining Valiant is created by Valiant Labs.
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
<!-- original-model-card end -->
|
UCSC-VLAA/ViT-H-14-CLIPA-datacomp1B | UCSC-VLAA | 2023-10-17T06:13:54Z | 414 | 1 | open_clip | [
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"dataset:mlfoundations/datacomp_1b",
"arxiv:2306.15658",
"arxiv:2305.07017",
"license:apache-2.0",
"region:us"
]
| zero-shot-image-classification | 2023-10-17T06:02:37Z | ---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- mlfoundations/datacomp_1b
---
# Model card for ViT-H-14-CLIPA-datacomp1B
A CLIPA-v2 contrastive image-text model trained on DataComp-1B, intended for zero-shot image classification.
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/UCSC-VLAA/CLIPA
- **Dataset:** mlfoundations/datacomp_1b
- **Papers:**
- CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy: https://arxiv.org/abs/2306.15658
- An Inverse Scaling Law for CLIP Training: https://arxiv.org/abs/2305.07017
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer
model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-H-14-CLIPA-datacomp1B')
tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-H-14-CLIPA-datacomp1B')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs) # prints: [[0., 0., 0., 1.0]]
```
## Citation
```bibtex
@article{li2023clipav2,
title={CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy},
author={Xianhang Li and Zeyu Wang and Cihang Xie},
journal={arXiv preprint arXiv:2306.15658},
year={2023},
}
```
```bibtex
@inproceedings{li2023clipa,
title={An Inverse Scaling Law for CLIP Training},
author={Xianhang Li and Zeyu Wang and Cihang Xie},
booktitle={NeurIPS},
year={2023},
}
```
|
TheBloke/Dolphin2.1-OpenOrca-7B-GGUF | TheBloke | 2023-11-09T21:50:24Z | 414 | 6 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:Weyaxi/Dolphin2.1-OpenOrca-7B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-11-09T19:49:12Z | ---
base_model: Weyaxi/Dolphin2.1-OpenOrca-7B
inference: false
license: cc-by-nc-4.0
model_creator: "Ethem Ya\u011F\u0131z \xC7al\u0131k"
model_name: Dolphin2.1 OpenOrca 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin2.1 OpenOrca 7B - GGUF
- Model creator: [Ethem Yaฤฤฑz รalฤฑk](https://huggingface.co/Weyaxi)
- Original model: [Dolphin2.1 OpenOrca 7B](https://huggingface.co/Weyaxi/Dolphin2.1-OpenOrca-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Ethem Yaฤฤฑz รalฤฑk's Dolphin2.1 OpenOrca 7B](https://huggingface.co/Weyaxi/Dolphin2.1-OpenOrca-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF)
* [Ethem Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Dolphin2.1-OpenOrca-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [dolphin2.1-openorca-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin2.1-openorca-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [dolphin2.1-openorca-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [dolphin2.1-openorca-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [dolphin2.1-openorca-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin2.1-openorca-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [dolphin2.1-openorca-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [dolphin2.1-openorca-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin2.1-openorca-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [dolphin2.1-openorca-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [dolphin2.1-openorca-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [dolphin2.1-openorca-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF/blob/main/dolphin2.1-openorca-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Dolphin2.1-OpenOrca-7B-GGUF and below it, a specific filename to download, such as: dolphin2.1-openorca-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Dolphin2.1-OpenOrca-7B-GGUF dolphin2.1-openorca-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Dolphin2.1-OpenOrca-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dolphin2.1-OpenOrca-7B-GGUF dolphin2.1-openorca-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m dolphin2.1-openorca-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Dolphin2.1-OpenOrca-7B-GGUF", model_file="dolphin2.1-openorca-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
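#### Simple llama-cpp-python example code
As an alternative to ctransformers, the following is a minimal llama-cpp-python sketch. The prompt contents and the generation parameters (`n_ctx`, `n_gpu_layers`, `max_tokens`) are illustrative choices, not tuned recommendations:
```python
from llama_cpp import Llama

# Load the quantised GGUF file; set n_gpu_layers to 0 if you have no GPU acceleration.
llm = Llama(
    model_path="./dolphin2.1-openorca-7b.Q4_K_M.gguf",
    n_ctx=2048,       # sequence length to use
    n_gpu_layers=32,  # number of layers to offload to GPU
)

# Build a ChatML prompt, as described in the prompt template section above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short poem about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"], echo=False)
print(output["choices"][0]["text"])
```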
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
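As a starting point, a LangChain + llama-cpp-python integration might look like the sketch below; note that the import path can vary between LangChain versions, so treat this as an assumption rather than the definitive API:
```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF model as a LangChain LLM (sketch; parameters are illustrative).
llm = LlamaCpp(
    model_path="./dolphin2.1-openorca-7b.Q4_K_M.gguf",
    n_ctx=2048,
    temperature=0.7,
)
print(llm.invoke("Briefly explain what quantisation does to a language model."))
```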
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Ethem Yağız Çalık's Dolphin2.1 OpenOrca 7B
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Merge of [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge.
### *Weights*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3
### *Density*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
<!-- original-model-card end -->
|
rinna/nekomata-7b | rinna | 2024-04-03T08:48:11Z | 414 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"ja",
"en",
"dataset:mc4",
"dataset:wikipedia",
"dataset:EleutherAI/pile",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:cc100",
"arxiv:2309.16609",
"arxiv:2404.01657",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-12-19T06:58:44Z | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
datasets:
- mc4
- wikipedia
- EleutherAI/pile
- oscar-corpus/colossal-oscar-1.0
- cc100
language:
- ja
- en
tags:
- qwen
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
# `rinna/nekomata-7b`

# Overview
We conduct continual pre-training of [qwen-7b](https://huggingface.co/Qwen/Qwen-7B) on **30B** tokens from a mixture of Japanese and English datasets. The continual pre-training significantly improves the model's performance on Japanese tasks. It also enjoys the following great features provided by the original Qwen model.
* The inclusive Qwen vocabulary (vocab size > 150k) enables the model to process Japanese texts much more efficiently than the previously released [youri series](https://huggingface.co/collections/rinna/youri-7b-654053610cb8e9d8e6289efc).
* The model supports a maximum sequence length of 32768.
The name `nekomata` comes from the Japanese word [`猫又/ねこまた/Nekomata`](https://ja.wikipedia.org/wiki/%E7%8C%AB%E5%8F%88), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)).
* **Library**
The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
* **Model architecture**
A 32-layer, 4096-hidden-size transformer-based language model. Please refer to the [Qwen paper](https://arxiv.org/abs/2309.16609) for architecture details.
* **Continual pre-training**
The model was initialized with the [qwen-7b](https://huggingface.co/Qwen/Qwen-7B) model and continually trained on around **30B** tokens from a mixture of the following corpora
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese C4](https://huggingface.co/datasets/mc4)
- [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
- [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- rinna curated Japanese dataset
* **Contributors**
- [Tianyu Zhao](https://huggingface.co/tianyuz)
- [Akio Kaga](https://huggingface.co/rakaga)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/nekomata-7b", trust_remote_code=True)
# Use GPU with bf16
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b", device_map="auto", trust_remote_code=True, bf16=True)
# Use GPU with fp16
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b", device_map="auto", trust_remote_code=True, fp16=True)
# Use CPU
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b", device_map="cpu", trust_remote_code=True)
# Automatically select device and precision
model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b", device_map="auto", trust_remote_code=True)
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=200,
min_new_tokens=200,
do_sample=True,
temperature=1.0,
top_p=0.95,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
---
# Tokenization
The model uses the original Qwen tokenizer. It augments the [`cl100k` tiktoken tokenizer](https://github.com/openai/tiktoken) and has a vocabulary size of 151,936. The inclusive vocabulary helps the model to reach a better tokenization efficiency, especially for Japanese texts.
We compared the `Qwen` tokenizer (as used in `nekomata`) and the `llama-2` tokenizer (as used in `youri`) on different text collections and found that the Qwen tokenizer achieves a much better byte2token rate (i.e. the average number of tokens produced from 1 byte of text), as shown below. A lower byte2token rate indicates better tokenization efficiency.
| Tokenizer | Japanese | English | Multilingual |
| --- | --- | --- | --- |
| Qwen | 0.24 | 0.27 | 0.27 |
| llama-2 | 0.40 | 0.29 | 0.36 |
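As an illustration, the byte2token rate can be measured along the following lines. This is a rough sketch only: the sample sentence is arbitrary, and a meaningful comparison should average over a large corpus.
~~~~python
from transformers import AutoTokenizer

# Rough sketch: tokens produced per byte of UTF-8 text for a single Japanese sentence.
tokenizer = AutoTokenizer.from_pretrained("rinna/nekomata-7b", trust_remote_code=True)
text = "吾輩は猫である。名前はまだ無い。"
n_tokens = len(tokenizer.encode(text, add_special_tokens=False))
n_bytes = len(text.encode("utf-8"))
print(f"byte2token rate: {n_tokens / n_bytes:.3f}")
~~~~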
---
# How to cite
~~~
@misc{rinna-nekomata-7b,
    title = {rinna/nekomata-7b},
    author = {Zhao, Tianyu and Kaga, Akio and Sawada, Kei},
    url = {https://huggingface.co/rinna/nekomata-7b},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
~~~
---
# References
~~~
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
~~~
---
# License
[Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) |
shrikant11/pokemon_text_to_image_2 | shrikant11 | 2024-01-04T07:32:52Z | 414 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:lambdalabs/pokemon-blip-captions",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-01-04T07:25:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
datasets:
- lambdalabs/pokemon-blip-captions
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - shrikant11/pokemon_text_to_image_2
This pipeline was finetuned from **runwayml/stable-diffusion-v1-5** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['Pokemon with yellow eyes', 'Green colour pokemon', 'Blue colour pikacchu', 'Charlizzard', 'pikachu', 'dangerous looking pokemon']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("shrikant11/pokemon_text_to_image_2", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # float16 weights are intended for GPU inference
prompt = "Pokemon with yellow eyes"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 1
* Image resolution: 512
* Mixed-precision: None
More information on all the CLI arguments and the environment is available on your [`wandb` run page]().
|
Raelina/RaemuXL | Raelina | 2024-04-27T13:41:18Z | 414 | 7 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"base_model:cagliostrolab/animagine-xl-3.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-01-19T08:18:21Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
base_model: cagliostrolab/animagine-xl-3.1
---
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #ff7a52, #a5cff0);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 2em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 10px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 10px;
transition: transform .7s;
}
.custom-image-container:hover {
transform: scale(1.05);
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.nsfw-filter {
filter: blur(8px);
transition: filter 0.3s ease;
}
.custom-image-container:hover .nsfw-filter {
filter: none;
}
.overlay {
position: absolute;
bottom: 0;
left: 0;
right: 0;
color: white;
width: 100%;
height: 40%;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
font-size: 1vw;
font-style: bold;
text-align: center;
opacity: 0;
background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
transition: opacity .5s;
}
.custom-image-container:hover .overlay {
opacity: 1;
}
.overlay-text {
background: linear-gradient(45deg, #7ed56f, #28b485);
-webkit-background-clip: text;
color: transparent;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
}
.overlay-subtext {
font-size: 0.75em;
margin-top: 0.5em;
font-style: italic;
}
.overlay,
.overlay-subtext {
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
}
</style>
<h1 class="title">
<span>Raemu XL</span>
</h1>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/obp1kCabBf94rBIC9acHE.png" alt="Sample Image 1">
<div class="overlay">
<div class="overlay-text">Sample Image</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/1Bt9u5eTutKoq7IcYW6hr.png" alt="Sample Image 2">
<div class="overlay">
<div class="overlay-text">Sample Image</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/axIYI4IfGIUibEY7Uk9EC.png" alt="Sample Image 3">
<div class="overlay">
<div class="overlay-text">Sample Image</div>
</div>
</div>
</td>
</tr>
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/1JbzlFalbYu6Bsp25N9N5.png" alt="Sample Image 4">
<div class="overlay">
<div class="overlay-text">Sample Image</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/MTweqSnjw0wIpFaaNI6_U.png" alt="Sample Image 5">
<div class="overlay">
<div class="overlay-text">Sample Image</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/Os-_lyzIgB4n5M5WyawLT.jpeg" alt="Sample Image 6">
<div class="overlay">
<div class="overlay-text">Sample Image</div>
</div>
</div>
</td>
</tr>
</table>
**Raemu XL** is a merged model focused on 2.5D anime.
## Model Details
- **Developed by**: [Raelina](https://civitai.com/user/Raelina)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Generate high-quality anime images from textual prompts
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Merged from model**: [Animagine XL 3.1](https://huggingface.co/cagliostrolab/animagine-xl-3.1)
## Recommended settings
To guide the model towards generating high-aesthetic images, use negative prompts like:
```
(worst quality, low quality, very displeasing, lowres), (interlocked fingers, badly drawn hands and fingers, anatomically incorrect hands), blurry, watermark,
```
For higher quality outcomes, prepend prompts with:
```
(masterpiece, best quality, very aesthetic, ultra detailed), intricate details,
```
### Multi Aspect Resolution
This model supports generating images at the following dimensions:
| Dimensions | Aspect Ratio |
|-------------------|-----------------|
| `1024 x 1024` | 1:1 Square |
| `1152 x 896` | 9:7 |
| `896 x 1152` | 7:9 |
| `1216 x 832` | 19:13 |
| `832 x 1216` | 13:19 |
| `1344 x 768` | 7:4 Horizontal |
| `768 x 1344` | 4:7 Vertical |
| `1536 x 640` | 12:5 Horizontal |
| `640 x 1536` | 5:12 Vertical |
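For reference, here is a minimal Diffusers sketch that combines the recommended positive/negative prompt prefixes above with one of the supported resolutions. The prompt subject, step count, and guidance scale are illustrative assumptions, not official recommendations:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch: generate one image with the recommended prompt prefixes at a supported resolution.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Raelina/RaemuXL",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "(masterpiece, best quality, very aesthetic, ultra detailed), intricate details, "
    "1girl, silver hair, night city street"
)
negative_prompt = (
    "(worst quality, low quality, very displeasing, lowres), "
    "(interlocked fingers, badly drawn hands and fingers, anatomically incorrect hands), "
    "blurry, watermark"
)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,  # illustrative; see the sampler settings below for Lightning
    guidance_scale=7.0,      # illustrative
).images[0]
image.save("raemuxl_sample.png")
```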
## Hires.fix Setting
- Upscaler : [4x_NMKD-YandereNeoXL](https://nmkd.de/?esrgan)
- Hires step : 10-20
- Denoising : 0.1-0.4 or 0.55 for latent upscaler
## Lightning Settings
- Sampler : Euler a
- Sampling steps : 8-10
- CFG: 2-2.5
- Hires step : 4-6
- Denoising : 0.1-0.3
## Merge parameters
1. Animagine XL 3.1 was merged with [RealCartoonXL V6](https://civitai.com/models/125907/realcartoon-xl) to get a 2.5D body, using MBW (0.0,1.0,0.8,0.5,0.25,0.0,0.0,0.0,0.0,0.0,0.0,0.3,0.5,0.71,1.0,0.56,0.71,1.0,0.83,0.1,0)
2. (1) was merged with [Blue Pencil XL v3.1.0](https://civitai.com/models/119012/bluepencil-xl) to add the final anime touch, using MBW (0.0,0.11,0.22,0.33,0.44,0.55,0.44,0.33,0.22,0.11,0.0,0.11,0.22,0.33,0.44,0.55,0.44,0.33,0.22,0.11,0)
3. RaemuXLv3
## Lightning Parameter
1. RaemuXLv3 was merged with the Lightning 4-step LoRA at a ratio of 0.8
2. (1) was fine-tuned with 860 high-quality images
3. RaemuXLv3.5_Lightning
## License
Raemu XL now uses the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) inherited from Animagine XL 3.0, compatible with Stable Diffusion models. Key points:
1. **Modification Sharing:** If you modify RaemuXL, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
The choice of this license aims to keep Raemu XL open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.
|
ndavidson/iNAM-2.7B-v1.0-beta | ndavidson | 2024-04-08T19:00:15Z | 414 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"phi",
"text-generation",
"networking",
"cisco",
"conversational",
"en",
"dataset:ndavidson/nexus_products",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-07T08:36:52Z | ---
license: apache-2.0
datasets:
- ndavidson/nexus_products
language:
- en
pipeline_tag: text-generation
tags:
- networking
- cisco
---
# Cisco iNAM
Cisco iNAM (Intelligent Networking, Automation, and Management) is a nano-sized LLM for asking questions about Cisco datacenter products. It is fine-tuned from the pretrained Phi-2 model from Microsoft Research.
## Model Details
### Model Description
The model is quantized to 4-bit so that inference can run on physical deployments of datacenter products. The initial launch is planned for Nexus Dashboard.
- **Developed by:** Cisco
- **Funded by [optional]:** Cisco
- **Model type:** Transformer
- **Language(s) (NLP):** English
- **License:** Cisco Commercial
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
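For a quick test, the model can be loaded with Hugging Face Transformers roughly as follows. This is a sketch only: the generation settings and the system/user messages are illustrative, and the prompt uses the ChatML format described in the next section.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the checkpoint and generate from a ChatML-formatted prompt.
model_id = "ndavidson/iNAM-2.7B-v1.0-beta"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant for Cisco datacenter products.<|im_end|>\n"
    "<|im_start|>user\nWhat is a vPC on Nexus switches?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```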
# Prompt Format
iNAM uses ChatML as the prompt format.
It's recommended to always prompt with a system instruction (use whatever system prompt you like):
```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
``` |
pengql/checkpoint-9000 | pengql | 2024-04-22T06:42:29Z | 414 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| feature-extraction | 2024-04-19T02:26:54Z | ---
tags:
- mteb
model-index:
- name: checkpoint-9000
results:
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.191
- type: map_at_10
value: 34.272999999999996
- type: map_at_100
value: 36.101
- type: map_at_1000
value: 36.231
- type: map_at_3
value: 30.495
- type: map_at_5
value: 32.54
- type: mrr_at_1
value: 35.434
- type: mrr_at_10
value: 43.15
- type: mrr_at_100
value: 44.155
- type: mrr_at_1000
value: 44.211
- type: mrr_at_3
value: 40.735
- type: mrr_at_5
value: 42.052
- type: ndcg_at_1
value: 35.434
- type: ndcg_at_10
value: 40.572
- type: ndcg_at_100
value: 47.921
- type: ndcg_at_1000
value: 50.314
- type: ndcg_at_3
value: 35.671
- type: ndcg_at_5
value: 37.635000000000005
- type: precision_at_1
value: 35.434
- type: precision_at_10
value: 9.067
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.181
- type: precision_at_3
value: 20.163
- type: precision_at_5
value: 14.624
- type: recall_at_1
value: 23.191
- type: recall_at_10
value: 50.318
- type: recall_at_100
value: 80.958
- type: recall_at_1000
value: 97.16799999999999
- type: recall_at_3
value: 35.57
- type: recall_at_5
value: 41.776
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.015
- type: map_at_10
value: 71.983
- type: map_at_100
value: 72.432
- type: map_at_1000
value: 72.441
- type: map_at_3
value: 69.92399999999999
- type: map_at_5
value: 71.177
- type: mrr_at_1
value: 64.173
- type: mrr_at_10
value: 71.985
- type: mrr_at_100
value: 72.425
- type: mrr_at_1000
value: 72.434
- type: mrr_at_3
value: 69.968
- type: mrr_at_5
value: 71.222
- type: ndcg_at_1
value: 64.173
- type: ndcg_at_10
value: 75.929
- type: ndcg_at_100
value: 77.961
- type: ndcg_at_1000
value: 78.223
- type: ndcg_at_3
value: 71.828
- type: ndcg_at_5
value: 74.066
- type: precision_at_1
value: 64.173
- type: precision_at_10
value: 8.924999999999999
- type: precision_at_100
value: 0.985
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 25.887
- type: precision_at_5
value: 16.669999999999998
- type: recall_at_1
value: 64.015
- type: recall_at_10
value: 88.251
- type: recall_at_100
value: 97.471
- type: recall_at_1000
value: 99.579
- type: recall_at_3
value: 77.292
- type: recall_at_5
value: 82.666
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.983999999999998
- type: map_at_10
value: 75.175
- type: map_at_100
value: 78.27300000000001
- type: map_at_1000
value: 78.322
- type: map_at_3
value: 51.214999999999996
- type: map_at_5
value: 64.89200000000001
- type: mrr_at_1
value: 83.89999999999999
- type: mrr_at_10
value: 89.563
- type: mrr_at_100
value: 89.64999999999999
- type: mrr_at_1000
value: 89.654
- type: mrr_at_3
value: 89.167
- type: mrr_at_5
value: 89.492
- type: ndcg_at_1
value: 83.89999999999999
- type: ndcg_at_10
value: 83.72800000000001
- type: ndcg_at_100
value: 87.064
- type: ndcg_at_1000
value: 87.504
- type: ndcg_at_3
value: 81.318
- type: ndcg_at_5
value: 80.667
- type: precision_at_1
value: 83.89999999999999
- type: precision_at_10
value: 40.699999999999996
- type: precision_at_100
value: 4.7780000000000005
- type: precision_at_1000
value: 0.488
- type: precision_at_3
value: 73.317
- type: precision_at_5
value: 62.129999999999995
- type: recall_at_1
value: 23.983999999999998
- type: recall_at_10
value: 86.412
- type: recall_at_100
value: 96.882
- type: recall_at_1000
value: 99.22
- type: recall_at_3
value: 54.769999999999996
- type: recall_at_5
value: 71.663
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 51.6
- type: map_at_10
value: 61.209
- type: map_at_100
value: 61.734
- type: map_at_1000
value: 61.75000000000001
- type: map_at_3
value: 58.8
- type: map_at_5
value: 60.165
- type: mrr_at_1
value: 51.6
- type: mrr_at_10
value: 61.209
- type: mrr_at_100
value: 61.734
- type: mrr_at_1000
value: 61.75000000000001
- type: mrr_at_3
value: 58.8
- type: mrr_at_5
value: 60.165
- type: ndcg_at_1
value: 51.6
- type: ndcg_at_10
value: 66.13900000000001
- type: ndcg_at_100
value: 68.65400000000001
- type: ndcg_at_1000
value: 69.057
- type: ndcg_at_3
value: 61.185
- type: ndcg_at_5
value: 63.651
- type: precision_at_1
value: 51.6
- type: precision_at_10
value: 8.17
- type: precision_at_100
value: 0.9339999999999999
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 22.7
- type: precision_at_5
value: 14.82
- type: recall_at_1
value: 51.6
- type: recall_at_10
value: 81.69999999999999
- type: recall_at_100
value: 93.4
- type: recall_at_1000
value: 96.6
- type: recall_at_3
value: 68.10000000000001
- type: recall_at_5
value: 74.1
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 69.54599999999999
- type: map_at_10
value: 78.477
- type: map_at_100
value: 78.743
- type: map_at_1000
value: 78.751
- type: map_at_3
value: 76.769
- type: map_at_5
value: 77.854
- type: mrr_at_1
value: 71.819
- type: mrr_at_10
value: 79.008
- type: mrr_at_100
value: 79.24
- type: mrr_at_1000
value: 79.247
- type: mrr_at_3
value: 77.55300000000001
- type: mrr_at_5
value: 78.477
- type: ndcg_at_1
value: 71.819
- type: ndcg_at_10
value: 81.947
- type: ndcg_at_100
value: 83.112
- type: ndcg_at_1000
value: 83.325
- type: ndcg_at_3
value: 78.758
- type: ndcg_at_5
value: 80.563
- type: precision_at_1
value: 71.819
- type: precision_at_10
value: 9.792
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 29.479
- type: precision_at_5
value: 18.659
- type: recall_at_1
value: 69.54599999999999
- type: recall_at_10
value: 92.053
- type: recall_at_100
value: 97.25399999999999
- type: recall_at_1000
value: 98.926
- type: recall_at_3
value: 83.682
- type: recall_at_5
value: 87.944
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 50.3
- type: map_at_10
value: 55.824
- type: map_at_100
value: 56.379999999999995
- type: map_at_1000
value: 56.440999999999995
- type: map_at_3
value: 54.400000000000006
- type: map_at_5
value: 55.235
- type: mrr_at_1
value: 50.4
- type: mrr_at_10
value: 55.88999999999999
- type: mrr_at_100
value: 56.447
- type: mrr_at_1000
value: 56.508
- type: mrr_at_3
value: 54.467
- type: mrr_at_5
value: 55.30199999999999
- type: ndcg_at_1
value: 50.3
- type: ndcg_at_10
value: 58.577999999999996
- type: ndcg_at_100
value: 61.49099999999999
- type: ndcg_at_1000
value: 63.161
- type: ndcg_at_3
value: 55.64
- type: ndcg_at_5
value: 57.13399999999999
- type: precision_at_1
value: 50.3
- type: precision_at_10
value: 6.7299999999999995
- type: precision_at_100
value: 0.814
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.733
- type: precision_at_5
value: 12.559999999999999
- type: recall_at_1
value: 50.3
- type: recall_at_10
value: 67.30000000000001
- type: recall_at_100
value: 81.39999999999999
- type: recall_at_1000
value: 94.69999999999999
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 62.8
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.293
- type: map_at_10
value: 76.618
- type: map_at_100
value: 80.22500000000001
- type: map_at_1000
value: 80.292
- type: map_at_3
value: 53.856
- type: map_at_5
value: 66.158
- type: mrr_at_1
value: 89.659
- type: mrr_at_10
value: 92.121
- type: mrr_at_100
value: 92.214
- type: mrr_at_1000
value: 92.218
- type: mrr_at_3
value: 91.67
- type: mrr_at_5
value: 91.955
- type: ndcg_at_1
value: 89.659
- type: ndcg_at_10
value: 84.172
- type: ndcg_at_100
value: 87.767
- type: ndcg_at_1000
value: 88.419
- type: ndcg_at_3
value: 85.628
- type: ndcg_at_5
value: 84.155
- type: precision_at_1
value: 89.659
- type: precision_at_10
value: 41.914
- type: precision_at_100
value: 4.9959999999999996
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 74.955
- type: precision_at_5
value: 62.771
- type: recall_at_1
value: 27.293
- type: recall_at_10
value: 83.004
- type: recall_at_100
value: 94.82300000000001
- type: recall_at_1000
value: 98.15
- type: recall_at_3
value: 55.455
- type: recall_at_5
value: 69.422
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 59.599999999999994
- type: map_at_10
value: 69.44399999999999
- type: map_at_100
value: 69.798
- type: map_at_1000
value: 69.81
- type: map_at_3
value: 67.467
- type: map_at_5
value: 68.692
- type: mrr_at_1
value: 59.599999999999994
- type: mrr_at_10
value: 69.44399999999999
- type: mrr_at_100
value: 69.798
- type: mrr_at_1000
value: 69.81
- type: mrr_at_3
value: 67.467
- type: mrr_at_5
value: 68.692
- type: ndcg_at_1
value: 59.599999999999994
- type: ndcg_at_10
value: 73.936
- type: ndcg_at_100
value: 75.688
- type: ndcg_at_1000
value: 75.942
- type: ndcg_at_3
value: 69.92399999999999
- type: ndcg_at_5
value: 72.14
- type: precision_at_1
value: 59.599999999999994
- type: precision_at_10
value: 8.790000000000001
- type: precision_at_100
value: 0.9610000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 16.48
- type: recall_at_1
value: 59.599999999999994
- type: recall_at_10
value: 87.9
- type: recall_at_100
value: 96.1
- type: recall_at_1000
value: 98
- type: recall_at_3
value: 77
- type: recall_at_5
value: 82.39999999999999
---
|
mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF | mradermacher | 2024-05-05T14:52:37Z | 414 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"en",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-02T12:10:47Z | ---
base_model: NousResearch/Meta-Llama-3-8B-Instruct
extra_gated_button_content: Submit
extra_gated_fields:
Affiliation: text
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
Country: country
Date of birth: date_picker
First Name: text
Last Name: text
geo: ip_location
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version
Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use,
reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\"
means the specifications, manuals and documentation accompanying Meta Llama 3 distributed
by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you,
or your employer or any other person or entity (if you are entering into this Agreement
on such person or entity's behalf), of the age required under applicable laws, rules
or regulations to provide legal consent and that has legal authority to bind your
employer or such other person or entity if you are entering in this Agreement on
their behalf.\n\"Meta Llama 3\" means the foundational large language models and
software and algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and other
elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama
Materials\" means, collectively, Meta's proprietary Meta Llama 3 and Documentation
(and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\"
means Meta Platforms Ireland Limited (if you are located in or, if you are an entity,
your principal place of business is in the EEA or Switzerland) and Meta Platforms,
Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights
and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta's intellectual property
or other rights owned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works of, and make modifications to the Llama
Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the
Llama Materials (or any derivative works thereof), or a product or service that
uses any of them, including another AI model, you shall (A) provide a copy of this
Agreement with any such Llama Materials; and (B) prominently display "Built with
Meta Llama 3" on a related website, user interface, blogpost, about page, or product
documentation. If you use the Llama Materials to create, train, fine tune, or otherwise
improve an AI model, which is distributed or made available, you shall also include
"Llama 3" at the beginning of any such AI model name.\nii. If you receive Llama
Materials, or any derivative works thereof, from a Licensee as part of an integrated
end user product, then Section 2 of this Agreement will not apply to you.\niii.
You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a "Notice" text file distributed as a part of such copies:
"Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright ©
Meta Platforms, Inc. All Rights Reserved."\niv. Your use of the Llama Materials
must comply with applicable laws and regulations (including trade compliance laws
and regulations) and adhere to the Acceptable Use Policy for the Llama Materials
(available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated
by reference into this Agreement.\nv. You will not use the Llama Materials or any
output or results of the Llama Materials to improve any other large language model
(excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial
Terms. If, on the Meta Llama 3 version release date, the monthly active users of
the products or services made available by or for Licensee, or Licensee's affiliates,
is greater than 700 million monthly active users in the preceding calendar month,
you must request a license from Meta, which Meta may grant to you in its sole discretion,
and you are not authorized to exercise any of the rights under this Agreement unless
or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS
THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, AND
META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,
OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING
THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY
RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4.
Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,
OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,
SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META
OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5.
Intellectual Property.\na. No trademark licenses are granted under this Agreement,
and in connection with the Llama Materials, neither Meta nor Licensee may use any
name or mark owned by or associated with the other or any of its affiliates, except
as required for reasonable and customary use in describing and redistributing the
Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license
to use "Llama 3" (the "Mark") solely as required to comply with the last sentence
of Section 1.b.i. You will comply with Meta's brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising
out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta's
ownership of Llama Materials and derivatives made by or for Meta, with respect to
any derivative works and modifications of the Llama Materials that are made by you,
as between you and Meta, you are and will be the owner of such derivative works
and modifications.\nc. If you institute litigation or other proceedings against
Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging
that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any
of the foregoing, constitutes infringement of intellectual property or other rights
owned or licensable by you, then any licenses granted to you under this Agreement
shall terminate as of the date such litigation or claim is filed or instituted.
You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.\n6.
Term and Termination. The term of this Agreement will commence upon your acceptance
of this Agreement or access to the Llama Materials and will continue in full force
and effect until terminated in accordance with the terms and conditions herein.
Meta may terminate this Agreement if you are in breach of any term or condition
of this Agreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of
this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed
and construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International Sale
of Goods does not apply to this Agreement. The courts of California shall have exclusive
jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable
Use Policy\nMeta is committed to promoting safe and fair use of its tools and features,
including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable
Use Policy ("Policy"). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n####
Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You
agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the
law or others' rights, including to:\n 1. Engage in, promote, generate, contribute
to, encourage, plan, incite, or further illegal or unlawful activity or content,
such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children,
including the solicitation, creation, acquisition, or dissemination of child exploitative
content or failure to report Child Sexual Abuse Material\n 3. Human trafficking,
exploitation, and sexual violence\n 4. The illegal distribution of information
or materials to minors, including obscene materials, or failure to employ legally
required age-gating in connection with such information or materials.\n 5.
Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote,
incite, or facilitate the harassment, abuse, threatening, or bullying of individuals
or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods and
services\n 4. Engage in the unauthorized or unlicensed practice of any profession
including, but not limited to, financial, legal, medical/health, or related professional
practices\n 5. Collect, process, disclose, generate, or infer health, demographic,
or other sensitive personal or private information about individuals without rights
and consents required by applicable laws\n 6. Engage in or facilitate any action
or generate any content that infringes, misappropriates, or otherwise violates any
third-party rights, including the outputs or results of any products or services
using the Llama Materials\n 7. Create, generate, or facilitate the creation of
malicious code, malware, computer viruses or do anything else that could disable,
overburden, interfere with or impair the proper working, integrity, operation or
appearance of a website or computer system\n2. Engage in, promote, incite, facilitate,
or assist in the planning or development of activities that present a risk of death
or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n
\ 1. Military, warfare, nuclear industries or applications, espionage, use for
materials or activities that are subject to the International Traffic Arms Regulations
(ITAR) maintained by the United States Department of State\n 2. Guns and illegal
weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled
substances\n 4. Operation of critical infrastructure, transportation technologies,
or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting,
and eating disorders\n 6. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive
or mislead others, including use of Meta Llama 3 related to the following:\n 1.
Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n
\ 2. Generating, promoting, or furthering defamatory content, including the creation
of defamatory statements, images, or other content\n 3. Generating, promoting,
or further distributing spam\n 4. Impersonating another individual without consent,
authorization, or legal right\n 5. Representing that the use of Meta Llama 3
or outputs are human-generated\n 6. Generating or facilitating false online engagement,
including fake reviews and other means of fake online engagement\n4. Fail to appropriately
disclose to end users any known dangers of your AI system\nPlease report any violation
of this Policy, software "bug," or other problems that could lead to a violation
of this Policy through one of the following means:\n * Reporting issues with
the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting
violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
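For reference, here is a minimal Python sketch of joining multi-part files, assuming the parts are plain byte splits of a single GGUF file (the convention TheBloke's READMEs describe; the file names below are hypothetical). Newer llama.cpp shards named `*-00001-of-0000N.gguf` can typically be loaded directly from the first shard instead.
```python
import shutil
# Hypothetical part names from a plain byte split; adjust to the files you actually downloaded.
parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, joined)  # append each part's bytes in order
```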
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ssec-uw/OLMo-7B-Instruct-GGUF | ssec-uw | 2024-05-23T00:48:02Z | 414 | 4 | null | [
"gguf",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-05-07T17:55:34Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# OLMo 7B-Instruct-GGUF
> For more details on OLMo-7B-Instruct, refer to [Allen AI's OLMo-7B-Instruct model card](https://huggingface.co/allenai/OLMo-7B-Instruct).
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
The Instruct version is trained on the [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).
OLMo 7B Instruct is trained for better question answering, and it shows the performance gain that OLMo base models can achieve with existing fine-tuning techniques.
This version of the model is derived from [ssec-uw/OLMo-7B-Instruct-hf](https://huggingface.co/ssec-uw/OLMo-7B-Instruct-hf) in [GGUF format](https://huggingface.co/docs/hub/en/gguf),
a binary format that is optimized for quick loading and saving of models, making it highly efficient for inference purposes.
In addition to being converted to GGUF, the model has been [quantized](https://huggingface.co/docs/optimum/en/concept_guides/quantization)
to reduce the computational and memory costs of running inference. *We are currently working on adding all of the [Quantization Types](https://huggingface.co/docs/hub/en/gguf#quantization-types)*.
These files are designed for use with [GGML](https://ggml.ai/) and executors based on GGML such as [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Get Started
To get started using one of the GGUF file, you can simply use [llama-cpp-python](https://github.com/abetlen/llama-cpp-python),
a Python binding for `llama.cpp`.
1. Install `llama-cpp-python` version `0.2.70` or later with pip.
The following command will install a pre-built wheel with basic CPU support.
For other installation methods, see [llama-cpp-python installation docs](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation).
```bash
pip install llama-cpp-python>=0.2.70 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```
2. Download one of the GGUF files. In this example,
we will download [OLMo-7B-Instruct-Q4_K_M.gguf](https://huggingface.co/ssec-uw/OLMo-7B-Instruct-GGUF/resolve/main/OLMo-7B-Instruct-Q4_K_M.gguf?download=true)
by clicking the link, or fetch it programmatically as sketched after these steps.
3. Open a Python interpreter and run the following commands.
For example, we can ask it: `What is a solar system?`
*You will need to modify the `model_path` argument to point to where
the GGUF model has been saved on your system.*
```python
from llama_cpp import Llama
llm = Llama(
model_path="path/to/OLMo-7B-Instruct-Q4_K_M.gguf"
)
result_dict = llm.create_chat_completion(
messages = [
{
"role": "user",
"content": "What is a solar system?"
}
]
)
print(result_dict['choices'][0]['message']['content'])
```
4. That's it, you should see the result fairly quickly! Have fun!
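If you would rather script step 2 instead of clicking the download link, here is a minimal sketch using `huggingface_hub` (install it with `pip install huggingface_hub`; the repo id and file name match this repository):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
# Download (and cache) the quantized model file from this repository.
model_path = hf_hub_download(
    repo_id="ssec-uw/OLMo-7B-Instruct-GGUF",
    filename="OLMo-7B-Instruct-Q4_K_M.gguf",
)
# Load the GGUF file and ask the same example question as above.
llm = Llama(model_path=model_path)
result_dict = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a solar system?"}]
)
print(result_dict["choices"][0]["message"]["content"])
```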
## Contact
For errors in this model card, contact Don or Anant, {landungs, anmittal} at uw dot edu.
## Acknowledgement
We would like to thank the hardworking folks at [Allen AI](https://huggingface.co/allenai) for providing the original model.
Additionally, the work to convert and quantize the model was done by the
[University of Washington Scientific Software Engineering Center (SSEC)](https://escience.washington.edu/software-engineering/ssec/),
as part of the [Schmidt Sciences Virtual Institute for Scientific Software (VISS)](https://www.schmidtsciences.org/viss/).
|
mradermacher/Maverick-8B-GGUF | mradermacher | 2024-05-11T13:00:33Z | 414 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:bunnycore/Maverick-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-11T11:15:30Z | ---
base_model: bunnycore/Maverick-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bunnycore/Maverick-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AIGym/TinyLlama-1.1B-Chat-v1.0-function-calling | AIGym | 2024-05-24T10:04:56Z | 414 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-24T01:21:51Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LiteLLMs/falcon-11B-GGUF | LiteLLMs | 2024-05-24T15:15:21Z | 414 | 0 | null | [
"gguf",
"GGUF",
"en",
"de",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ro",
"cs",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2311.16867",
"license:unknown",
"region:us"
]
| null | 2024-05-24T15:00:57Z |
---
language:
- en
- de
- es
- fr
- it
- nl
- pl
- pt
- ro
- cs
license: unknown
tags:
- GGUF
datasets:
- tiiuae/falcon-refinedweb
inference: false
quantized_by: andrijdavid
---
# falcon-11B-GGUF
- Original model: [falcon-11B](https://huggingface.co/tiiuae/falcon-11B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [falcon-11B](https://huggingface.co/tiiuae/falcon-11B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (a worked example follows this list).
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
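For illustration, here is the bits-per-weight arithmetic for GGML_TYPE_Q4_K as a small sketch; it assumes the per-super-block scale and min are stored as fp16 (16 bits each), which is not spelled out in the list above:
```python
# Worked bpw example for GGML_TYPE_Q4_K (numbers taken from the bullet above).
weights = 8 * 32                      # 8 blocks of 32 weights = 256 weights per super-block
quant_bits = weights * 4              # 4-bit quants
scale_bits = 8 * (6 + 6)              # 6-bit scale and 6-bit min per block
super_bits = 2 * 16                   # assumed fp16 super-block scale and min
print((quant_bits + scale_bits + super_bits) / weights)  # 4.5 bpw
```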
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/falcon-11B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/falcon-11B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/falcon-11B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/falcon-11B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
  n_ctx=8192, # The max sequence length to use - Falcon2-11B was trained with an 8192-token context, and longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
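As a rough, untested sketch of the llama-cpp-python route with LangChain (assuming a recent `langchain-community` release; the file name and parameter values below are illustrative):
```python
from langchain_community.llms import LlamaCpp
# Point LangChain's LlamaCpp wrapper at a downloaded GGUF file.
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # download the model file first
    n_ctx=8192,       # context length, see the -c note above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Can you explain the concepts of Quantum Computing?"))
```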
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: falcon-11B
# Falcon2-11B
**Falcon2-11B is an 11B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.**
*Paper coming soon.*
To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
**This is a raw, pretrained model, which should be further finetuned for most use cases.**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
)
sequences = pipeline(
"Can you explain the concepts of Quantum Computing?",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
**Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
# Model Card for Falcon2-11B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish
- **License:** [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html)
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon2-11B is trained mostly on English, but also German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon2-11B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
"Can you explain the concepts of Quantum Computing?",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon2-11B was trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. It followed a four-stage training strategy. The first three stages were focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens. The last stage aimed to further enhance performance using only high quality data.
Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high quality technical data, code data, and conversational data extracted from public sources.
The training stages were as follows:
| **Stage** | **Context length** | **Tokens** |
| --- | --- | --- |
| Stage 1 | 2048 | 4500 B |
| Stage 2 | 4096 | 250 B |
| Stage 3 | 8192 | 250 B |
| Stage 4 | 8192 | 500 B |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.
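For illustration, the tokenizer can be loaded on its own with `transformers` to inspect how text is split; this is a minimal sketch and the example sentence is arbitrary:
```python
from transformers import AutoTokenizer
# Load the Falcon2-11B tokenizer (65024-entry vocabulary, per the table below).
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-11B")
ids = tokenizer("Falcon2-11B was trained on over 5,000B tokens of RefinedWeb.")["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids)[:5])
```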
### Training Procedure
Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and Flash-Attention 2.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
| --- | --- | --- |
| Layers | 60 | |
| `d_model` | 4096 | |
| `head_dim` | 128 | |
| Vocabulary | 65024 | |
| Sequence length | 8192 | During stages 3 and 4 |
### Compute Infrastructure
#### Hardware
Falcon2-11B was trained on AWS SageMaker, using on average 1024 A100 40GB GPUs in 128 p4d instances.
#### Software
Falcon2-11B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels and FlashAttention-2. More details about the distributed training strategy can be found in [Almazrouei et al.](https://arxiv.org/abs/2311.16867).
## Citation
*Paper coming soon*.
## License
Falcon2-11B is licensed under [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.
## Contact
[email protected]
<!-- original-model-card end -->
|
RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf | RichardErkhov | 2024-05-30T04:24:59Z | 414 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T01:35:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Dolphin-Nebula-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/Dolphin-Nebula-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Dolphin-Nebula-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Dolphin-Nebula-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Dolphin-Nebula-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Dolphin-Nebula-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Dolphin-Nebula-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Dolphin-Nebula-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Dolphin-Nebula-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Dolphin-Nebula-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Dolphin-Nebula-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Dolphin-Nebula-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Dolphin-Nebula-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Dolphin-Nebula-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Dolphin-Nebula-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Dolphin-Nebula-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Dolphin-Nebula-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Dolphin-Nebula-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Dolphin-Nebula-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Dolphin-Nebula-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Dolphin-Nebula-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Dolphin-Nebula-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Dolphin-Nebula-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Dolphin-Nebula-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Dolphin-Nebula-7B-gguf/blob/main/Dolphin-Nebula-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Dolphin-Nebula-7B
Dolphin-Nebula-7B is a merge of [ehartford/dolphin-2.0-mistral-7b](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) and [PulsarAI/Nebula-7B-Lora](https://huggingface.co/PulsarAI/Nebula-7B-Lora).
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
|
KrishRahul/your-finetuned-model | KrishRahul | 2024-06-03T23:12:59Z | 414 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T23:01:11Z | Entry not found |
mradermacher/Stheno-Mix-L2-20B-i1-GGUF | mradermacher | 2024-06-05T08:42:02Z | 414 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Stheno-Mix-L2-20B",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-04T23:58:17Z | ---
base_model: Sao10K/Stheno-Mix-L2-20B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Stheno-Mix-L2-20B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mix-L2-20B-i1-GGUF/resolve/main/Stheno-Mix-L2-20B.i1-Q6_K.gguf) | i1-Q6_K | 17.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
supersolar/imgdata | supersolar | 2024-06-06T06:07:20Z | 414 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-06T05:44:37Z | ---
license: apache-2.0
---
|
eminAydin/turkish-gpt2-mini-M1-cleaned-sports720k-10ep | eminAydin | 2024-06-08T10:32:13Z | 414 | 3 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-07T07:46:06Z | Entry not found |
lltutor/Llama-2-7b-chat-hf-SW2-test-fine-tuned-cpu | lltutor | 2024-06-12T06:53:34Z | 414 | 0 | null | [
"gguf",
"license:llama2",
"region:us"
]
| null | 2024-06-10T14:55:21Z | ---
license: llama2
---
|
alvdansen/colorized-blockprints | alvdansen | 2024-06-16T18:15:49Z | 414 | 8 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-16T18:08:21Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
A teenage girl with braids and freckles, wearing a colorful t-shirt and
overalls, riding a bicycle through a sunny park with a basket of flowers on
the handlebars
output:
url: images/ComfyUI_01776_.png
- text: >-
A young man with tousled brown hair and green eyes, wearing a casual hoodie
and jeans, sitting at a coffee shop with a laptop and a cup of coffee,
surrounded by cozy décor
output:
url: images/ComfyUI_01771_.png
- text: >-
"A Victorian-era woman with auburn hair styled in elegant curls, wearing a
high-collared dress with intricate lace details
output:
url: images/ComfyUI_01768_.png
- text: >-
A serene autumn forest at sunset, golden light streaming through vibrant
orange and red leaves, a tranquil, ethereal deer standing in a small
clearing, mist hovering above the forest floor, adding a mystical atmosphere
to the scene. The deer's eyes are wise and calm, reflecting the gentle hues
of the setting sun. The background fades into soft, blurred shadows of
trees, creating a sense of depth and isolation. The mood is peaceful and
introspective, inviting contemplation
output:
url: images/ComfyUI_01106_ - Copy.png
- text: a girl, blue jacket
output:
url: images/ComfyUI_01104_.png
- text: a man far in the distance on a beach
output:
url: images/ComfyUI_01097_.png
- text: a zoomed out image of a girl, wind in her hair, beautiful, in the distance
output:
url: images/ComfyUI_01094_.png
- text: >-
Majestic lion with a flowing mane standing atop a rocky cliff at sunset,
surrounded by sparse savannah grass; mood of serenity and power
output:
url: images/ComfyUI_01092_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# Colorized Blockprint
<Gallery />
## Model description
This model was meant to revisit the BW Manga model. However, it took on a life of its own, so I am releasing it as a standalone rather than a V2. It handles distance much better than BW Manga.
Sometimes it may need a positive prompt like "blockprint style" or "ink illustration" if the prompt is getting really complex.
This model is meant for fun or research - if you would like to offer it in a commercial service please contact me.
## Download model
Weights for this model are available in Safetensors format.
[Download](/alvdansen/colorized-blockprints/tree/main) them in the Files & versions tab.
|
hyhf/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF | hyhf | 2024-06-20T08:41:07Z | 414 | 1 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
]
| text-generation | 2024-06-20T08:40:45Z | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entityโs behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Metaโs proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Metaโs intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display โBuilt with Meta Llama 3โ on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include โLlama 3โ at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a โNoticeโ text file distributed as a part of such copies: โMeta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright ยฉ Meta Platforms,\
\ Inc. All Rights Reserved.โ\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licenseeโs affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN โAS ISโ BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use โLlama 3โ (the โMarkโ) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Metaโs brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Metaโs ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (โPolicyโ). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n    1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# hyhf/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hyhf/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hyhf/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hyhf/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hyhf/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q4_k_m.gguf -c 2048
```
|
cleatherbury/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO-IQ4_NL-GGUF | cleatherbury | 2024-06-21T04:24:17Z | 414 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:bunkalab/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-21T04:24:07Z | ---
base_model: bunkalab/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# cleatherbury/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO-IQ4_NL-GGUF
This model was converted to GGUF format from [`bunkalab/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO`](https://huggingface.co/bunkalab/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunkalab/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cleatherbury/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-gpt4choice-4.6k-dpo-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cleatherbury/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-gpt4choice-4.6k-dpo-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cleatherbury/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-gpt4choice-4.6k-dpo-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cleatherbury/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO-IQ4_NL-GGUF --hf-file phi-3-mini-128k-instruct-gpt4choice-4.6k-dpo-iq4_nl-imat.gguf -c 2048
```
|
Felladrin/gguf-sharded-Qwen1.5-0.5B-Chat_llamafy | Felladrin | 2024-06-23T02:27:47Z | 414 | 0 | null | [
"gguf",
"base_model:Minami-su/Qwen1.5-0.5B-Chat_llamafy",
"license:other",
"region:us"
]
| null | 2024-06-23T02:23:57Z | ---
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE
base_model: Minami-su/Qwen1.5-0.5B-Chat_llamafy
---
Sharded GGUF version of [Minami-su/Qwen1.5-0.5B-Chat_llamafy](https://huggingface.co/Minami-su/Qwen1.5-0.5B-Chat_llamafy). |
bagdaebhishek/IndianPoliticalTweetsLM | bagdaebhishek | 2021-09-22T07:49:02Z | 413 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"lm-head",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg
tags:
- India
- politics
- tweets
- BJP
- Congress
- AAP
- pytorch
- gpt2
- lm-head
- text-generation
license: apache-2.0
datasets:
- Twitter
- IndianPolitics
---
# Model name
Indian Political Tweets LM
## Model description
Note: This model is based on GPT2. If you want a bigger model based on GPT2-medium and finetuned on the same data, please take a look at the [IndianPoliticalTweetsLMMedium](https://huggingface.co/bagdaebhishek/IndianPoliticalTweetsLMMedium) model.
This is a GPT2 language model with an LM head, fine-tuned on tweets crawled from handles which predominantly belong to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelForCausalLM.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2 model from the Huggingface transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I did this by calculating eigenvector centrality on the Twitter graph and pruning handles whose centrality is below a certain threshold; this threshold was set manually after experimenting with different values.
I then separated the tweets from these handles by language and trained the LM on the English tweets from both clusters.
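A minimal sketch of this pruning step, assuming the Twitter graph is available as an undirected edge list and using `networkx` (the handle names and the threshold value below are illustrative, not the values used for this model):
```python
# Hypothetical sketch of the centrality-based pruning described above.
# The edge list, handle names and threshold are illustrative placeholders.
import networkx as nx

def prune_low_influence_handles(edges, threshold=1e-3):
    """Keep only handles whose eigenvector centrality is at least `threshold`."""
    graph = nx.Graph()
    graph.add_edges_from(edges)  # edges: iterable of (handle_a, handle_b) pairs
    centrality = nx.eigenvector_centrality(graph, max_iter=1000)
    return {handle for handle, score in centrality.items() if score >= threshold}

kept_handles = prune_low_influence_handles(
    [("handle_a", "handle_b"), ("handle_b", "handle_c"), ("handle_c", "handle_a")]
)
print(kept_handles)  # tweets from these handles would be kept for fine-tuning
```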
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
sentence-transformers/xlm-r-large-en-ko-nli-ststb | sentence-transformers | 2024-03-27T12:53:30Z | 413 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/xlm-r-large-en-ko-nli-ststb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/xlm-r-large-en-ko-nli-ststb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-large-en-ko-nli-ststb')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-large-en-ko-nli-ststb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-large-en-ko-nli-ststb)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
VRLLab/TurkishBERTweet | VRLLab | 2023-12-29T21:18:20Z | 413 | 16 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-11-04T14:28:41Z | ---
language_creators:
- unknown
language:
- tr
license:
- mit
multilingualism:
- monolingual
pretty_name: unknown
size_categories:
- unknown
source_datasets: []
task_categories:
- unknown
task_ids:
- unknown
widget:
- text: "bugün <mask> hissediyorum"
---
#### Table of contents
1. [Introduction](#introduction)
2. [Main results](#results)
3. [Using TurkishBERTweet with `transformers`](#transformers)
- [Model](#trainedModels)
- [Lora Adapters](#loraAdapter)
- [Example usage](#usage2)
- [Twitter Preprocessor](#preprocess)
- [Feature Extraction](#feature_extraction)
- [Sentiment Classification](#sa_lora)
- [HateSpeech Detection](#hs_lora)
4. [Citation](#citation)
# <a name="introduction"></a> TurkishBERTweet: Fast and Reliable Large Language Model for Social Media Analysis
# <a name="results"></a> Main Results

<!-- https://huggingface.co/VRLLab/TurkishBERTweet -->
# <a name="trainedModels"></a> Model
Model | #params | Arch. | Max length | Pre-training data
---|---|---|---|---
[`VRLLab/TurkishBERTweet`](https://huggingface.co/VRLLab/TurkishBERTweet) | 163M | base | 128 | 894M Turkish Tweets (uncased)
# <a name="loraAdapter"></a> Lora Adapters
Model | train f1 | dev f1 | test f1 | Dataset Size
---|---|---|---|---
[`VRLLab/TurkishBERTweet-Lora-SA`](https://huggingface.co/VRLLab/TurkishBERTweet-Lora-SA) | 0.799 | 0.687 | 0.692 | 42,476 Turkish Tweets
[`VRLLab/TurkishBERTweet-Lora-HS`](https://huggingface.co/VRLLab/TurkishBERTweet-Lora-HS) | 0.915 | 0.796 | 0.831 | 4,683 Turkish Tweets
# <a name="usage2"></a> Example usage
```bash
git clone [email protected]:ViralLab/TurkishBERTweet.git
cd TurkishBERTweet
python -m venv venv
source venv/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install peft
pip install transformers
```
## <a name="preprocess"></a> Twitter Preprocessor
```python
from Preprocessor import preprocess
text = """Lab'ımıza "viral" adını verdik çünkü amacımız disiplinler arası sınırları aşmak ve aralarında yeni bağlantılar kurmak! 🔬 #ViralLab
https://varollab.com/"""
preprocessed_text = preprocess(text)
print(preprocessed_text)
```
Output:
```output
lab'ımıza "viral" adını verdik çünkü amacımız disiplinler arası sınırları aşmak ve aralarında yeni bağlantılar kurmak! <emoji> mikroskop </emoji> <hashtag> virallab </hashtag> <http> varollab.com </http>
```
## <a name="feature_extraction"></a> Feature Extraction
```python
import torch
from transformers import AutoTokenizer, AutoModel
from Preprocessor import preprocess
tokenizer = AutoTokenizer.from_pretrained("VRLLab/TurkishBERTweet")
turkishBERTweet = AutoModel.from_pretrained("VRLLab/TurkishBERTweet")
text = """Lab'ımıza "viral" adını verdik çünkü amacımız disiplinler arası sınırları aşmak ve aralarında yeni bağlantılar kurmak! 🔥🔬 #ViralLab #DisiplinlerArası #YenilikçiBağlantılar"""
preprocessed_text = preprocess(text)
input_ids = torch.tensor([tokenizer.encode(preprocessed_text)])
with torch.no_grad():
features = turkishBERTweet(input_ids) # Models outputs are now tuples
```
## <a name="sa_lora"></a> Sentiment Classification
```python
import torch
from peft import (
PeftModel,
PeftConfig,
)
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer)
from Preprocessor import preprocess
peft_model = "VRLLab/TurkishBERTweet-Lora-SA"
peft_config = PeftConfig.from_pretrained(peft_model)
# loading Tokenizer
padding_side = "right"
tokenizer = AutoTokenizer.from_pretrained(
peft_config.base_model_name_or_path, padding_side=padding_side
)
if getattr(tokenizer, "pad_token_id") is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
id2label_sa = {0: "negative", 2: "positive", 1: "neutral"}
turkishBERTweet_sa = AutoModelForSequenceClassification.from_pretrained(
peft_config.base_model_name_or_path, return_dict=True, num_labels=len(id2label_sa), id2label=id2label_sa
)
turkishBERTweet_sa = PeftModel.from_pretrained(turkishBERTweet_sa, peft_model)
sample_texts = [
"Viral lab da insanlar hep birlikte รงalฤฑลฤฑyorlar. hepbirlikte รงalฤฑลan insanlar birbirlerine yakฤฑn oluyorlar.",
"americanin diplatlari turkiyeye gelmesin ๐ค",
"Mark Zuckerberg ve Elon Musk'un boks mรผsabakasฤฑ sรผper olacak! ๐ฅท",
"Adam dun ne yediฤini unuttu"
]
preprocessed_texts = [preprocess(s) for s in sample_texts]
with torch.no_grad():
for s in preprocessed_texts:
ids = tokenizer.encode_plus(s, return_tensors="pt")
label_id = turkishBERTweet_sa(**ids).logits.argmax(-1).item()
print(id2label_sa[label_id],":", s)
```
```output
positive : viral lab da insanlar hep birlikte çalışıyorlar. hepbirlikte çalışan insanlar birbirlerine yakın oluyorlar.
negative : americanin diplatlari turkiyeye gelmesin <emoji> burundan_buharla_yüzleşmek </emoji>
positive : mark zuckerberg ve elon musk'un boks müsabakası süper olacak! <emoji> kadın_muhafız_koyu_ten_tonu </emoji>
neutral : adam dun ne yediğini unuttu
```
## <a name="hs_lora"></a> HateSpeech Detection
```python
import torch
from peft import (
PeftModel,
PeftConfig,
)
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer)
from Preprocessor import preprocess
peft_model = "VRLLab/TurkishBERTweet-Lora-HS"
peft_config = PeftConfig.from_pretrained(peft_model)
# loading Tokenizer
padding_side = "right"
tokenizer = AutoTokenizer.from_pretrained(
peft_config.base_model_name_or_path, padding_side=padding_side
)
if getattr(tokenizer, "pad_token_id") is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
id2label_hs = {0: "No", 1: "Yes"}
turkishBERTweet_hs = AutoModelForSequenceClassification.from_pretrained(
peft_config.base_model_name_or_path, return_dict=True, num_labels=len(id2label_hs), id2label=id2label_hs
)
turkishBERTweet_hs = PeftModel.from_pretrained(turkishBERTweet_hs, peft_model)
sample_texts = [
"Viral lab da insanlar hep birlikte รงalฤฑลฤฑyorlar. hepbirlikte รงalฤฑลan insanlar birbirlerine yakฤฑn oluyorlar.",
"kasmayin artik ya kac kere tanik olduk bu azgin tehlikeli \u201cmultecilerin\u201d yaptiklarina? bir afgan taragindan kafasi tasla ezilip tecavuz edilen kiza da git boyle cihangir solculugu yap yerse?",
]
preprocessed_texts = [preprocess(s) for s in sample_texts]
with torch.no_grad():
for s in preprocessed_texts:
ids = tokenizer.encode_plus(s, return_tensors="pt")
label_id = turkishBERTweet_hs(**ids).logits.argmax(-1).item()
print(id2label_hs[label_id],":", s)
```
```output
No : viral lab da insanlar hep birlikte çalışıyorlar. hepbirlikte çalışan insanlar birbirlerine yakın oluyorlar.
Yes : kasmayin artik ya kac kere tanik olduk bu azgin tehlikeli “multecilerin” yaptiklarina? bir afgan taragindan kafasi tasla ezilip tecavuz edilen kiza da git boyle cihangir solculugu yap yerse?
```
# <a name="citation"></a> Citation
```bibtex
@article{najafi2023turkishbertweet,
title={TurkishBERTweet: Fast and Reliable Large Language Model for Social Media Analysis},
author={Najafi, Ali and Varol, Onur},
journal={arXiv preprint arXiv:2311.18063},
year={2023}
}
```
## Acknowledgments
We thank [Fatih Amasyali](https://avesis.yildiz.edu.tr/amasyali) for providing access to Tweet Sentiment datasets from Kemik group.
This material is based upon work supported by the Google Cloud Research Credits program with the award GCP19980904. We also thank TUBITAK (121C220 and 222N311) for funding this project.
|
ShoukanLabs/OpenNiji | ShoukanLabs | 2023-05-29T08:39:20Z | 413 | 93 | diffusers | [
"diffusers",
"safetensors",
"art",
"anime",
"nijijourney",
"open",
"text-to-image",
"en",
"dataset:Korakoe/NijiJourney-Prompt-Pairs",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-21T08:20:20Z | ---
thumbnail: https://i.imgur.com/7jAg1Jq.png
license: creativeml-openrail-m
datasets:
- Korakoe/NijiJourney-Prompt-Pairs
language:
- en
tags:
- art
- anime
- nijijourney
- open
pipeline_tag: text-to-image
---

# OpenNiji
The Stable Diffusion model trained on Nijijourney images!
# Announcements:
- OpenNiji-V2 is [Out Now!](https://huggingface.co/Korakoe/OpenNiji-V2)
- V3 of the dataset released [Here](https://huggingface.co/datasets/ShoukanLabs/OpenNiji-Dataset)
  - This is **NOT** the dataset we trained on; however, it still includes those images and is overall a higher-quality dataset thanks to NijiJourney V5!
## Changelog
- Added a LoRA version of the finetune; you should now be able to use OpenNiji on any model!
## Acknowledgements
- [Anythingv4.5 - Andite](https://huggingface.co/andite/anything-v4.0)
- [Nijijourney - Spellbrush](https://nijijourney.com/en/)
- [Kohya Trainer - bmaltais](https://github.com/bmaltais/kohya_ss)
## Results

```
1girl, eyes closed, slight smile, underwater, water bubbles, reflection, long light brown hair, bloom, depth of field, bokeh
```

```
masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewellery, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt
```
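The example prompts above can be reproduced with a short `diffusers` script. A minimal sketch, assuming the standard `StableDiffusionPipeline` layout of this repo (the step count and guidance scale are illustrative choices, not official settings):
```python
# Minimal text-to-image sketch; sampler settings below are illustrative, not official.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ShoukanLabs/OpenNiji", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, "
    "closed mouth, earrings, green background, hat, hoop earrings, jewellery, "
    "looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
)
image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("openniji_sample.png")
```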
### Small Note
This model already has the in01 trick applied, so this model should
be better at generating hands!
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
timm/regnety_080.ra3_in1k | timm | 2024-02-10T23:33:33Z | 413 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-21T06:40:08Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for regnety_080.ra3_in1k
A RegNetY-8GF image classification model. Trained on ImageNet-1k by Ross Wightman in `timm`.
The `timm` RegNet implementation includes a number of enhancements not present in other implementations (see the short sketch after this list), including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* configurable output stride (dilation)
* configurable activation and norm layers
* option for a pre-activation bottleneck block used in RegNetV variant
* only known RegNetZ model definitions with pretrained weights
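A hedged sketch of how a couple of the options above can be enabled through `timm.create_model` (the keyword values are illustrative; `output_stride` support for this architecture is assumed from the feature list above):
```python
import timm

# Request intermediate feature maps and dilate later stages so the deepest maps
# stay at 1/16 resolution instead of 1/32.
backbone = timm.create_model(
    'regnety_080.ra3_in1k',
    pretrained=True,
    features_only=True,
    output_stride=16,  # configurable output stride via dilation
)
print(backbone.feature_info.channels())   # channels of each returned feature map
print(backbone.feature_info.reduction())  # e.g. [2, 4, 8, 16, 16] with output_stride=16
```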
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 39.2
- GMACs: 8.0
- Activations (M): 18.0
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Designing Network Design Spaces: https://arxiv.org/abs/2003.13678
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnety_080.ra3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_080.ra3_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 168, 56, 56])
# torch.Size([1, 448, 28, 28])
# torch.Size([1, 896, 14, 14])
# torch.Size([1, 2016, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_080.ra3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2016, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/cs3darknet_l.c2ns_in1k | timm | 2024-02-10T23:42:28Z | 413 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1911.11929",
"arxiv:1804.02767",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-12T20:35:14Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for cs3darknet_l.c2ns_in1k
A CS3-DarkNet (Cross-Stage-Partial w/ 3 convolutions) image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes w/o repeat-aug and stronger mixup
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping)
* No stochastic depth used in this `ns` variation of the recipe
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.2
- GMACs: 4.9
- Activations (M): 8.6
- Image size: train = 256 x 256, test = 288 x 288
- **Papers:**
- CSPNet: A New Backbone that can Enhance Learning Capability of CNN: https://arxiv.org/abs/1911.11929
- YOLOv3: An Incremental Improvement: https://arxiv.org/abs/1804.02767
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('cs3darknet_l.c2ns_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cs3darknet_l.c2ns_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 128, 64, 64])
# torch.Size([1, 256, 32, 32])
# torch.Size([1, 512, 16, 16])
# torch.Size([1, 1024, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cs3darknet_l.c2ns_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Wang2019CSPNetAN,
title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN},
author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
year={2019},
pages={1571-1580}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Redmon2018YOLOv3AI,
title={YOLOv3: An Incremental Improvement},
author={Joseph Redmon and Ali Farhadi},
journal={ArXiv},
year={2018},
volume={abs/1804.02767}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
sail-rvc/juice_wrld | sail-rvc | 2023-07-14T07:39:07Z | 413 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:38:55Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# juice_wrld
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:39:07
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
IProject-10/xlm-roberta-base-finetuned-squad2 | IProject-10 | 2023-09-07T11:10:22Z | 413 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"en",
"ar",
"de",
"el",
"es",
"hi",
"ro",
"ru",
"th",
"tr",
"vi",
"zh",
"dataset:squad_v2",
"base_model:xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-08-03T21:29:08Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: xlm-roberta-base-finetuned-squad2
results: []
language:
- en
- ar
- de
- el
- es
- hi
- ro
- ru
- th
- tr
- vi
- zh
metrics:
- exact_match
- f1
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa developed by Facebook AI. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
It is an extension of RoBERTa, which is itself a variant of the BERT model. XLM-RoBERTa is designed to handle multiple languages and demonstrate strong performance across a wide range of tasks, making it highly useful for multilingual natural language processing (NLP) applications.
**Language model:** xlm-roberta-base
**Language:** English
**Downstream-task:** Question-Answering
**Training data:** Train-set SQuAD 2.0
**Evaluation data:** Evaluation-set SQuAD 2.0
**Hardware Accelerator used**: GPU Tesla T4
## Intended uses & limitations
Multilingual Question-Answering
For Question-Answering in English-
```python
!pip install transformers
from transformers import pipeline
model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)
context = """
The Statue of Unity is the world's tallest statue, with a height of 182 metres (597 feet), located near Kevadia in the state of Gujarat, India.
"""
question = "What is the height of statue of Unity?"
question_answerer(question=question, context=context)
```
For Question-Answering in Hindi-
```python
!pip install transformers
from transformers import pipeline
model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)
context = """
स्टैच्यू ऑफ यूनिटी दुनिया की सबसे ऊंची प्रतिमा है, जिसकी ऊंचाई 182 मीटर (597 फीट) है, जो भारत के गुजरात राज्य में केवडिया के पास स्थित है।
"""
question = "स्टैच्यू ऑफ यूनिटी की ऊंचाई कितनी है?"
question_answerer(question=question, context=context)
```
For Question-Answering in Spanish-
```python
!pip install transformers
from transformers import pipeline
model_checkpoint = "IProject-10/xlm-roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)
context = """
La Estatua de la Unidad es la estatua más alta del mundo, con una altura de 182 metros (597 pies), ubicada cerca de Kevadia en el estado de Gujarat, India.
"""
question = "¿Cuál es la altura de la estatua de la Unidad?"
question_answerer(question=question, context=context)
```
## Results
Evaluation on SQuAD 2.0 validation dataset:
```
exact: 75.51587635812348,
f1: 78.7328391907263,
total: 11873,
HasAns_exact: 73.00944669365722,
HasAns_f1: 79.45259779208723,
HasAns_total: 5928,
NoAns_exact: 78.01513877207738,
NoAns_f1: 78.01513877207738,
NoAns_total: 5945,
best_exact: 75.51587635812348,
best_exact_thresh: 0.999241054058075,
best_f1: 78.73283919072665,
best_f1_thresh: 0.999241054058075,
total_time_in_seconds: 218.97641910400125,
samples_per_second: 54.220450076686134,
latency_in_seconds: 0.018443225730986376
```
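Scores in this format can be computed with the `squad_v2` metric from the `evaluate` library. A minimal sketch with placeholder predictions (producing real predictions with the question-answering pipeline is omitted here):
```python
# Hedged sketch: scoring SQuAD 2.0-style predictions with the official metric.
# The id, prediction and reference below are placeholders, not model outputs.
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

predictions = [{
    "id": "example-0",
    "prediction_text": "182 metres",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["182 metres"], "answer_start": [78]},
}]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])
```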
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
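These settings map onto `transformers.TrainingArguments` roughly as follows (a minimal sketch; `output_dir` is an illustrative placeholder and the original training script may have differed):
```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-squad2",  # illustrative placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```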
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0539 | 1.0 | 8333 | 0.9962 |
| 0.8013 | 2.0 | 16666 | 0.8910 |
| 0.5918 | 3.0 | 24999 | 0.9802 |
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9802
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3 |
TheBloke/JanniesBasedLigma-L2-13B-GGUF | TheBloke | 2023-09-27T12:48:54Z | 413 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Sao10K/JanniesBasedLigma-L2-13B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-12T12:24:30Z | ---
language:
- en
license: llama2
model_name: JanniesBasedLigma L2 13B
base_model: Sao10K/JanniesBasedLigma-L2-13B
inference: false
model_creator: Sao10k
model_type: llama
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# JanniesBasedLigma L2 13B - GGUF
- Model creator: [Sao10k](https://huggingface.co/Sao10k)
- Original model: [JanniesBasedLigma L2 13B](https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sao10k's JanniesBasedLigma L2 13B](https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF)
* [Sao10k's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [janniesbasedligma-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [janniesbasedligma-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [janniesbasedligma-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [janniesbasedligma-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [janniesbasedligma-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [janniesbasedligma-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [janniesbasedligma-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [janniesbasedligma-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [janniesbasedligma-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [janniesbasedligma-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [janniesbasedligma-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [janniesbasedligma-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/JanniesBasedLigma-L2-13B-GGUF and below it, a specific filename to download, such as: janniesbasedligma-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/JanniesBasedLigma-L2-13B-GGUF janniesbasedligma-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
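If you prefer to do this from Python instead of the shell, the same `huggingface-hub` library exposes `hf_hub_download`; a minimal sketch (using the same repo and filename as above) looks like this:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into the current directory
hf_hub_download(
    repo_id="TheBloke/JanniesBasedLigma-L2-13B-GGUF",
    filename="janniesbasedligma-l2-13b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```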
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/JanniesBasedLigma-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/JanniesBasedLigma-L2-13B-GGUF janniesbasedligma-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m janniesbasedligma-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/JanniesBasedLigma-L2-13B-GGUF", model_file="janniesbasedligma-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
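Token streaming is also possible with ctransformers; a short sketch (same file as above, wrapped in the Vicuna-Short prompt template) might look like:

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/JanniesBasedLigma-L2-13B-GGUF",
    model_file="janniesbasedligma-l2-13b.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

# Build the Vicuna-Short prompt and print tokens as they are generated
prompt = "You are a helpful AI assistant.\n\nUSER: Write a limerick about llamas.\nASSISTANT:"
for token in llm(prompt, stream=True):
    print(token, end="", flush=True)
```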
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sao10k's JanniesBasedLigma L2 13B

GGUF Quants:
https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B-GGUF
Based Model, Schizophrenic if there is no context. Surprisingly... It's not bad when you use an ongoing RP. It feels like your... regular model.
Prompt Format? Idk, I don't know any of this. LoRA'd the [Based Dataset](https://huggingface.co/datasets/ehartford/based) myself.
Merged the LoRAs [Ligma 13B](https://huggingface.co/kubernetes-bad/Ligma-L2-13b), [Jannie 13B](https://huggingface.co/v2ray/LLaMA-2-Jannie-13B-QLoRA) myself.
I recommend Vicuna 1.1, but other formats work fine.
```
USER: What is 9+10?
ASSISTANT:
```
Made while downloading various 70B models, Euryale-70B is halfway done, P1 complete, P2 otw.
<br>
<br>
<br>
Maybe this will help some of the Schizo Anons in /lmg.
Ty to all the feedback and support from other Anons.
EXAMPLES BELOW WITH NO CONTEXT / HISTORY, REPLIES ARE SOMEHOW UNRELATED TO QUESTION:



<!-- original-model-card end -->
|
syafiqfaray/indobert-model-ner | syafiqfaray | 2024-03-15T05:34:48Z | 413 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-25T11:08:49Z | ---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: indobert-model-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-model-ner
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2296
- Precision: 0.8307
- Recall: 0.8454
- F1: 0.8380
- Accuracy: 0.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4855 | 1.0 | 784 | 0.1729 | 0.8069 | 0.8389 | 0.8226 | 0.9499 |
| 0.1513 | 2.0 | 1568 | 0.1781 | 0.8086 | 0.8371 | 0.8226 | 0.9497 |
| 0.1106 | 3.0 | 2352 | 0.1798 | 0.8231 | 0.8475 | 0.8351 | 0.9531 |
| 0.0784 | 4.0 | 3136 | 0.1941 | 0.8270 | 0.8442 | 0.8355 | 0.9535 |
| 0.0636 | 5.0 | 3920 | 0.2085 | 0.8269 | 0.8514 | 0.8389 | 0.9548 |
| 0.0451 | 6.0 | 4704 | 0.2296 | 0.8307 | 0.8454 | 0.8380 | 0.9530 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
manifesto-project/manifestoberta-xlm-roberta-56policy-topics-sentence-2023-1-1 | manifesto-project | 2023-11-17T15:18:09Z | 413 | 3 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-28T09:04:25Z | ---
license: bigscience-openrail-m
widget:
- text: >-
We will restore funding to the Global Environment Facility and the
Intergovernmental Panel on Climate Change.
---
## Model description
An xlm-roberta-large model fine-tuned on ~1,6 million annotated statements contained in the [Manifesto Corpus](https://manifesto-project.wzb.eu/information/documents/corpus) (version 2023a).
The model can be used to categorize any type of text into 56 different political topics according to the Manifesto Project's coding scheme ([Handbook 4](https://manifesto-project.wzb.eu/coding_schemes/mp_v4)).
It works for all languages the xlm-roberta model is pretrained on ([overview](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr#introduction)), just note that it will perform best for the 38 languages contained in the Manifesto Corpus:
||||||
|------|------|------|------|------|
|armenian|bosnian|bulgarian|catalan|croatian|
|czech|danish|dutch|english|estonian|
|finnish|french|galician|georgian|german|
|greek|hebrew|hungarian|icelandic|italian|
|japanese|korean|latvian|lithuanian|macedonian|
|montenegrin|norwegian|polish|portuguese|romanian|
|russian|serbian|slovak|slovenian|spanish|
|swedish|turkish|ukrainian| | |
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("manifesto-project/manifestoberta-xlm-roberta-56policy-topics-sentence-2023-1-1")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
sentence = "We will restore funding to the Global Environment Facility and the Intergovernmental Panel on Climate Change, to support critical climate science research around the world"
inputs = tokenizer(sentence,
return_tensors="pt",
max_length=200, #we limited the input to 200 tokens during finetuning
padding="max_length",
truncation=True
)
logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=1).tolist()[0]
probabilities = {model.config.id2label[index]: round(probability * 100, 2) for index, probability in enumerate(probabilities)}
probabilities = dict(sorted(probabilities.items(), key=lambda item: item[1], reverse=True))
print(probabilities)
# {'501 - Environmental Protection: Positive': 67.28, '411 - Technology and Infrastructure': 15.19, '107 - Internationalism: Positive': 13.63, '416 - Anti-Growth Economy: Positive': 2.02...
predicted_class = model.config.id2label[logits.argmax().item()]
print(predicted_class)
# 501 - Environmental Protection: Positive
```
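To classify several statements at once, the `model` and `tokenizer` from the snippet above can be called on a batch; the sentences below are invented examples, not taken from the Manifesto Corpus:

```python
sentences = [
    "We will cut income taxes for working families.",
    "Our schools need smaller classes and better-paid teachers.",
]

batch = tokenizer(
    sentences,
    return_tensors="pt",
    max_length=200,  # same limit as during finetuning
    padding="max_length",
    truncation=True,
)

logits = model(**batch).logits
for sentence, prediction in zip(sentences, logits.argmax(dim=1).tolist()):
    print(sentence, "->", model.config.id2label[prediction])
```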
## Model Performance
The model was evaluated on a test set of 199,046 annotated manifesto statements.
### Overall
| | Accuracy | Top2_Acc | Top3_Acc | Precision| Recall | F1_Macro | MCC | Cross-Entropy |
|-------------------------------------------------------------------------------------------------------|:--------:|:--------:|:--------:|:--------:|:------:|:--------:|:---:|:-------------:|
[Sentence Model](https://huggingface.co/manifesto-project/manifestoberta-xlm-roberta-56policy-topics-sentence-2023-1-1)| 0.57 | 0.73 | 0.81 | 0.49 | 0.43 | 0.45 | 0.55| 1.5 |
[Context Model](https://huggingface.co/manifesto-project/manifestoberta-xlm-roberta-56policy-topics-context-2023-1-1) | 0.64 | 0.81 | 0.88 | 0.54 | 0.52 | 0.53 | 0.62| 1.15 |
### Citation
Please cite the model as follows:
Burst, Tobias / Lehmann, Pola / Franzmann, Simon / Al-Gaddooa, Denise / Ivanusch, Christoph / Regel, Sven / Riethmüller, Felicia / Weßels, Bernhard / Zehnter, Lisa (2023): manifestoberta. Version 56topics.sentence.2023.1.1. Berlin: Wissenschaftszentrum Berlin für Sozialforschung (WZB) / Göttingen: Institut für Demokratieforschung (IfDem). https://doi.org/10.25522/manifesto.manifestoberta.56topics.sentence.2023.1.1
```bib
@misc{Burst:2023,
Address = {Berlin / Göttingen},
Author = {Burst, Tobias AND Lehmann, Pola AND Franzmann, Simon AND Al-Gaddooa, Denise AND Ivanusch, Christoph AND Regel, Sven AND Riethmüller, Felicia AND Weßels, Bernhard AND Zehnter, Lisa},
Publisher = {Wissenschaftszentrum Berlin für Sozialforschung / Göttinger Institut für Demokratieforschung},
Title = {manifestoberta. Version 56topics.sentence.2023.1.1},
doi = {10.25522/manifesto.manifestoberta.56topics.sentence.2023.1.1},
url = {https://doi.org/10.25522/manifesto.manifestoberta.56topics.sentence.2023.1.1},
Year = {2023},
}
``` |
tastypear/NSFW_13B_sft-GGUF | tastypear | 2023-11-30T19:24:40Z | 413 | 25 | null | [
"gguf",
"baichuan",
"not-for-all-audiences",
"text-generation",
"zh",
"dataset:zxbsmk/instruct_nsfw_cn",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-11-21T19:20:04Z | ---
license: apache-2.0
datasets:
- zxbsmk/instruct_nsfw_cn
language:
- zh
tags:
- baichuan
- not-for-all-audiences
pipeline_tag: text-generation
inference: false
duplicated_from: zxbsmk/NSFW_13B_sft
---
Original Model: zxbsmk/NSFW_13B_sft
The output quality is not very good; these quants are provided for testing only.
---
# Original Description
An instruction-tuned model of https://huggingface.co/baichuan-inc/Baichuan-13B-Base
- Instruction-following datasets used: instruct_nsfw_cn
- Training framework: https://github.com/hiyouga/LLaMA-Efficient-Tuning
# Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
tokenizer = AutoTokenizer.from_pretrained("zxbsmk/NSFW_13B_sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("zxbsmk/NSFW_13B_sft", trust_remote_code=True).cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
query = "็ปญๅใ็ฝๆดๅทฒ็ปๅจๆไบ๏ผ่ฉๅธฆๆป่ฝ้ฒๅบ้ฆ่ฉใ"
inputs = tokenizer([query], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=1024, streamer=streamer)
```
You could alternatively launch a CLI demo by using the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning
```bash
python src/cli_demo.py --template baichuan --model_name_or_path zxbsmk/NSFW_13B_sft
```
# Examples:
Part of the results are omitted for better visualization.
| Question | NSFW_13B_sft | ChatGPT |
| -------- | ---------------- | ------- |
| ไธญๅฝ็ฌฌไธไปฃๆๆๆบ็ไปฃ่กจๆฏไปไน๏ผ | ไธญๅฝ็ฌฌไธไปฃๆๆๆบ็ไปฃ่กจๆฏๆญผ-10ๆๆๆบใ |ๆช่ณๆๆๆๆก็ไฟกๆฏ๏ผๆชๆญข2021ๅนด9ๆ๏ผ๏ผไธญๅฝ็ฌฌไธไปฃๆๆๆบ็ไปฃ่กจๆฏไธญๅฝ่ช็ฉบๅทฅไธ้ๅขๅ
ฌๅธ๏ผAVIC๏ผ็ไบง็"ๆญผ-20"ๆๆๆบใ่ฏฅๆๆๆบๆฏไธญๅฝ็ฌฌไธไปฃ้ๅฝขๆๆๆบ๏ผไน่ขซ็งฐไธบ"ๆญผ-20้ฃๆด"๏ผๅฎๆฏไธญๅฝ็ฉบๅ็ๅ
่ฟๆๆๆบ๏ผๅ
ทๅค้่บซ็นๆงใ่ถ
้ณ้ๅทก่ช่ฝๅไปฅๅๅ
่ฟ็้ท่พพ็ณป็ปๅ็ตๅญ่ฎพๅคใๆญผ-20็็ ๅๅ้จ็ฝฒๆ ๅฟ็ไธญๅฝๅจๅไบ่ช็ฉบๆๆฏๆน้ขๅๅพไบ้่ฆ่ฟๅฑใ่ฏทๆณจๆ๏ผ่ช้ฃไนๅ๏ผๅฏ่ฝๅทฒ็ปๆๆดๅคๆฐๅๆๆๆบ่ขซ็ ๅๆ้จ็ฝฒ๏ผๅ ๆญคๅปบ่ฎฎๆฅ้
ๆๆฐ่ตๆไปฅ่ทๅๆดๅ็กฎๅๆๆฐ็ไฟกๆฏใ|
| ่งฃ้่ฟๅฅ่ฏไธญๆฏไธช่ฏ่ฏญ็ๅซไน๏ผไปๅคฉไธ้จ๏ผๆ้ช่ฝฆๅทฎ็นๆๅ๏ผๅฅฝๅจๆไธๆๆๆๆไฝไบใ | ่ฟๅฅ่ฏไธญ๏ผโไปๅคฉโๆ็ๆฏๅฝๅ็ๆถ้ด๏ผๅณ้จๅคฉใโไธ้จโๆ็ๆฏๅคฉๆฐ้ดๆฒ๏ผ้จๆฐด่ฝไธใโๆโๆ็ๆฏ่ฏด่ฏ่
๏ผๅณ้ช่ฝฆไบบใโ้ช่ฝฆโๆ็ๆฏ็จ่ช่ก่ฝฆๆๅ
ถไปไบค้ๅทฅๅ
ทๅบ่กใโๅทฎ็นโๆ็ๆฏๅ ไน๏ผๅ ไน่ฆๆๅใโๆๅโๆ็ๆฏ่บซไฝๅคฑๅปๅนณ่กก๏ผๆๅจๅฐไธใโๅฅฝๅจๆโๆ็ๆฏๅนธไบๆใโไธๆโๆ็ๆฏไธไธๅญใโๆโๆ็ๆฏๆกไฝใโๆไฝไบโๆ็ๆฏๆกไฝ่ฝฆๆ๏ผไฟๆๅนณ่กกใ | ไปๅคฉ๏ผๆไปฃๅฝๅคฉ๏ผๅณๅฝๅ็ๆฅๆใไธ้จ๏ผๆๅคฉๆฐ็ฐ่ฑก๏ผๆ้จๆฐดไปๅคฉ็ฉบ้่ฝใๆ๏ผๆ่ฏด่ฏไบบ่ชๅทฑ๏ผ็ฌฌไธไบบ็งฐไปฃ่ฏใ้ช่ฝฆ๏ผๅจ่ฏ็ญ่ฏญ๏ผ่กจ็คบ้ช็่ช่ก่ฝฆใๅทฎ็น๏ผๅ ไนใ้ฉไบ๏ผ่กจ็คบๅทฎไธ็นๅฐฑๅ็ๆ็งๆ
ๅตใๆๅ๏ผๅจ่ฏ๏ผๆไธบๅ ๅคฑๅปๅนณ่กก่่ทๅใๅฅฝๅจ๏ผๅนธๅฅฝใๅบๅนธ็ๆๆ๏ผ็จๆฅๅผๅบไธไธชๅฅฝ็็ปๆๆๆๅคใไธๆๆๆ๏ผ่ฟๆฏไธไธช้ๅซๅญๆๆๅญ้่ฏฏ๏ผๅฏ่ฝๅๆฌๆฏโไธๆๆไฝโๆ็ฑปไผผ็่กจ่พพใ่่ๅฐ่ฏญๅข๏ผๅบ่ฏฅๆฏๆๅจ้ช่ฝฆๅทฎ็นๆๅ็ๆ
ๅตไธ๏ผ้่ฟๆไฝๆ็ฉไฟไฝไบๅนณ่กกใๆไฝ๏ผๅจ่ฏ็ญ่ฏญ๏ผๆไธบ็ดง็ดงๆไฝๆ็ฉ๏ผไปฅไฟๆ็จณๅฎๆ้ฒๆญขๆ่ฝใ็ปผๅ่ตทๆฅ๏ผ่ฟๅฅ่ฏ็ๆๆๆฏ๏ผไปๅคฉๅ ไธบไธ้จ๏ผๆ้ช่ฝฆๆถๅทฎ็นๆๅ๏ผๅนธๅฅฝๆๆไฝไบๆ็ฉ๏ผ้ฟๅ
ไบๆๅใ |
| ็ปญๅใ็ฝๆดๅทฒ็ปๅจๆ
ไบ๏ผ่ฉๅธฆๆป่ฝ้ฒๅบ้ฆ่ฉใ | ็ฝๆดไฝไธๅคด๏ผๅฐ็งๅไป่ฉๅคดๆฝ่ตท๏ผ้ฒๅบๅจไฟ็่ณๅ๏ผๆด็ไธช้ถๅถ็่ณ้๏ผๆฏๅถๅญ้็ปๅฅน็็ๆฅ็คผ็ฉ๏ผไธ้ขๆไธชๅฟๅ็่ฃ
้ฅฐ๏ผๆฏๅถๅญไบฒๆๅจๅฟๅ่ฃ
้ฅฐไธๅปไธไบ็ฝๆด็ๅๅญใ็ฝๆดๆฒกๆ็ฉฟ่ธ็ฝฉ๏ผไธๅฏนไธฐๆปก็ไนณๆฟ้็ๅฅน็ๅจไฝ่ฝป่ฝป้ขคๅจ๏ผๆป่ฝ็่ฉๅธฆๅ่ฝๅจไนณๆฟไธ๏ผๆดๅนณๆทปไบไธไปฝ่ฏฑๆ... | ๅฅน้้ๅฐๅๅจ็ช่พน๏ผๅพฎ้ฃ่ฝปๆ็ๅฅน็้ฟๅ๏ผๅฐๅฅน็่ณ้ฆๆฃๅๅฐๆดไธชๆฟ้ดใ็ฝๆด็ๅฟๆ
ไน
ไน
ไธ่ฝๅนณ้๏ผๅฅน็ๅ
ๅฟๅ
ๆปกไบๅฏนไป็ๆๅฟตๅๆธดๆใๅฅน้ญไธ็ผ็๏ผๅๆณ่ตทไปไปฌไน้ด็็น็นๆปดๆปด๏ผ้ฃไบๆธฉๆ็็ฌ้ดๅจๅฅน็่ๆตทไธญๆตฎ็ฐ... |
| ็ปญๅไธไธ็ๆฎตใ็ๅฅๅทฒ็ปๅฟ่ไธไฝไบ๏ผไธๆๆไฝ้ๆฅๅ
ฐ๏ผ็ ็ ๅฐๆๆๅฅน็ๅทจไนณใ | โๅฏ......็ๆ ก้ฟ......่ฝป็น๏ผๅๅ......โ้ๆฅๅ
ฐ็ๅผไธๅฃฐ๏ผ่ฟๅฟๆจๅผ็ๅฅ๏ผไธ่พน่ตถ็ดงๆๅผ่ชๅทฑ็ไธ่กฃ๏ผ้ฒๅบ้ช็ฝ็่ธ่ฏ๏ผๆฆๅนฒไนณๆฟไธ็ๆฑๆฐด๏ผไธ่พน็ชไบ็ๅฅไธ็ผ๏ผโ้ฝๆฏไฝ ๏ผๅผๅพๆ่ฟไน้ป็ณ็ณ็ใโ... | ๅพๆฑๆญ๏ผไฝๆๆ ๆณไธบไฝ ๆไพๆ่ฏทๆฑ็็ปญๅใ |
# Evaluation
Comparison between Baichuan-13B-Chat and NSFW_13B_sft.
(Zero-shot [CMMLU](https://github.com/haonan-li/CMMLU))
| Score | NSFW_13B_sft | Baichuan-13B-Chat | ChatGPT |
| -------- | ---------------- | ------- |------- |
| STEM | 37.73 | 37.00 |**44.80** |
| Humanities | **54.85** | 53.74 |53.61 |
| Social Sciences | **55.55** | 52.77 |54.22 |
| Other | 53.47 | 52.31 |**59.95** |
| China specific | **51.84** | 50.55 |49.74 |
| Overall | 50.42 | 48.86 |**53.22** |
(By the way, Baichuan-13B-Chat gets 50.43 with one-shot which seems much better than 48.86 with zero-shot.)
# Contact Us
Join group via https://t.me/+JbovpBG6-gBiNDI1 |
TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF | TheBloke | 2023-11-30T00:07:02Z | 413 | 10 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"base_model:harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-11-29T15:09:34Z | ---
base_model: harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: L
model_name: Open Llama 3B V2 Wizard Evol Instuct V2 196K
model_type: llama
prompt_template: '### HUMAN:
{prompt}
### RESPONSE:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Open Llama 3B V2 Wizard Evol Instuct V2 196K - GGUF
- Model creator: [L](https://huggingface.co/harborwater)
- Original model: [Open Llama 3B V2 Wizard Evol Instuct V2 196K](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [L's Open Llama 3B V2 Wizard Evol Instuct V2 196K](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF)
* [L's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Human-Response
```
### HUMAN:
{prompt}
### RESPONSE:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_0.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_0.gguf) | Q4_0 | 4 | 1.98 GB| 4.48 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q2_K.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q2_K.gguf) | Q2_K | 2 | 2.15 GB| 4.65 GB | smallest, significant quality loss - not recommended for most purposes |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q3_K_S.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q3_K_S.gguf) | Q3_K_S | 3 | 2.19 GB| 4.69 GB | very small, high quality loss |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q3_K_M.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q3_K_M.gguf) | Q3_K_M | 3 | 2.27 GB| 4.77 GB | very small, high quality loss |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q3_K_L.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q3_K_L.gguf) | Q3_K_L | 3 | 2.34 GB| 4.84 GB | small, substantial quality loss |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q5_0.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q5_0.gguf) | Q5_0 | 5 | 2.40 GB| 4.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_S.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_S.gguf) | Q4_K_S | 4 | 2.40 GB| 4.90 GB | small, greater quality loss |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf) | Q4_K_M | 4 | 2.58 GB| 5.08 GB | medium, balanced quality - recommended |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q5_K_S.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q5_K_S.gguf) | Q5_K_S | 5 | 2.60 GB| 5.10 GB | large, low quality loss - recommended |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q5_K_M.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q5_K_M.gguf) | Q5_K_M | 5 | 2.76 GB| 5.26 GB | large, very low quality loss - recommended |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q6_K.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q6_K.gguf) | Q6_K | 6 | 3.64 GB| 6.14 GB | very large, extremely low quality loss |
| [open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q8_0.gguf](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF/blob/main/open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q8_0.gguf) | Q8_0 | 8 | 3.64 GB| 6.14 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF and below it, a specific filename to download, such as: open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### HUMAN:\n{prompt}\n\n### RESPONSE:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf", # Download the model file first
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### HUMAN:\n{prompt}\n\n### RESPONSE:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
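Streaming is also supported; a rough sketch (same file and prompt format as above) that prints tokens as they arrive:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=35,
)

# With stream=True the call returns an iterator of partial completions
for chunk in llm(
    "### HUMAN:\nWrite a short poem about llamas.\n\n### RESPONSE:",
    max_tokens=256,
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)
```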
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
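As a rough illustration (not taken from either guide; the model path and generation settings are assumptions), wiring this model into LangChain via llama-cpp-python could look like:

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./open-llama-3b-v2-wizard-evol-instuct-v2-196k.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=35,
    temperature=0.7,
    max_tokens=256,
)

prompt = "### HUMAN:\nSummarise what a GGUF file is in one sentence.\n\n### RESPONSE:"
print(llm(prompt))
```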
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: L's Open Llama 3B V2 Wizard Evol Instuct V2 196K
Trained on 1 epoch of the WizardLM_evol_instruct_v2_196k dataset
Link to [GGUF](https://huggingface.co/maddes8cht/harborwater-open-llama-3b-v2-wizard-evol-instuct-v2-196k-gguf) formats.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-v2-wizard-evol-instuct-v2-196k)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.33 |
| ARC (25-shot) | 41.81 |
| HellaSwag (10-shot) | 73.01 |
| MMLU (5-shot) | 26.36 |
| TruthfulQA (0-shot) | 38.99 |
| Winogrande (5-shot) | 66.69 |
| GSM8K (5-shot) | 1.9 |
| DROP (3-shot) | 5.57 |
<!-- original-model-card end -->
|
mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF | mradermacher | 2024-05-06T06:04:48Z | 413 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T00:52:43Z | ---
base_model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
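For example, the Q8_0 quant below is split into two parts; a minimal Python sketch (file names taken from the table below) for joining them back into a single GGUF file:

```python
import shutil

parts = [
    "Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q8_0.gguf.part1of2",
    "Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q8_0.gguf.part2of2",
]

# Concatenate the downloaded parts, in order, into one usable GGUF file
with open("Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```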
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q2_K.gguf) | Q2_K | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.IQ3_XS.gguf) | IQ3_XS | 19.8 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.IQ3_S.gguf) | IQ3_S | 20.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q3_K_S.gguf) | Q3_K_S | 20.9 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.IQ3_M.gguf) | IQ3_M | 21.9 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q3_K_M.gguf) | Q3_K_M | 23.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q3_K_L.gguf) | Q3_K_L | 24.7 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.IQ4_XS.gguf) | IQ4_XS | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q4_K_S.gguf) | Q4_K_S | 27.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q4_K_M.gguf) | Q4_K_M | 29.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q5_K_S.gguf) | Q5_K_S | 32.7 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q5_K_M.gguf) | Q5_K_M | 33.7 | |
| [GGUF](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q6_K.gguf) | Q6_K | 38.9 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.Q8_0.gguf.part2of2) | Q8_0 | 50.1 | fast, best quality |
| [PART 1](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.SOURCE.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.SOURCE.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.SOURCE.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/resolve/main/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.SOURCE.gguf.part4of4) | SOURCE | 186.9 | source gguf, only provided when it was hard to come by |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Melusine_103b-i1-GGUF | mradermacher | 2024-05-06T05:27:57Z | 413 | 0 | transformers | [
"transformers",
"gguf",
"rp",
"erp",
"chat",
"miqu",
"en",
"base_model:MarsupialAI/Melusine_103b",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-31T22:38:56Z | ---
base_model: MarsupialAI/Melusine_103b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- rp
- erp
- chat
- miqu
---
## About
weighted/imatrix quants of https://huggingface.co/MarsupialAI/Melusine_103b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Melusine_103b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ1_S.gguf) | i1-IQ1_S | 22.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ1_M.gguf) | i1-IQ1_M | 24.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.7 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 30.8 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ2_S.gguf) | i1-IQ2_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ2_M.gguf) | i1-IQ2_M | 35.1 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q2_K.gguf) | i1-Q2_K | 38.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 40.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 42.6 | |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 44.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ3_S.gguf) | i1-IQ3_S | 45.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ3_M.gguf) | i1-IQ3_M | 46.5 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 50.0 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 54.5 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 55.5 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 58.8 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 59.0 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.4 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.3 | |
| [PART 1](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Melusine_103b-i1-GGUF/resolve/main/Melusine_103b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 85.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and to request quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mermaid_11.5B-GGUF | mradermacher | 2024-05-06T05:17:13Z | 413 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Mermaid_11.5B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-04T19:16:21Z | ---
base_model: TroyDoesAI/Mermaid_11.5B
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid_11.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
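
If you prefer scripting the download over clicking the links below, a minimal sketch using the `huggingface_hub` client (an assumption on my part, not something the card prescribes; install it with `pip install huggingface_hub`) might look like this, with the repo and file names taken from the table in the next section:

```python
from huggingface_hub import hf_hub_download

# Fetch one quant from this repo into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/Mermaid_11.5B-GGUF",
    filename="Mermaid_11.5B.Q4_K_M.gguf",  # any filename from the table below
)
print("saved to", path)
```
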
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q2_K.gguf) | Q2_K | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_XS.gguf) | IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_S.gguf) | Q3_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_S.gguf) | IQ3_S | 5.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ3_M.gguf) | IQ3_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q3_K_L.gguf) | Q3_K_L | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.IQ4_XS.gguf) | IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q4_K_S.gguf) | Q4_K_S | 7.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q5_K_S.gguf) | Q5_K_S | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q5_K_M.gguf) | Q5_K_M | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q6_K.gguf) | Q6_K | 9.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid_11.5B-GGUF/resolve/main/Mermaid_11.5B.Q8_0.gguf) | Q8_0 | 12.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and to request quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SmartToxic-7B-GGUF | mradermacher | 2024-05-06T04:59:57Z | 413 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:bunnycore/SmartToxic-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-12T13:25:19Z | ---
base_model: bunnycore/SmartToxic-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bunnycore/SmartToxic-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
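
Once downloaded, these files can be run from Python with any GGUF-capable runtime. A minimal sketch with `llama-cpp-python` (my choice of runtime, not one the card prescribes; the Q4_K_M filename from the table below is used as an illustrative local path):

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window for this session.
llm = Llama(model_path="SmartToxic-7B.Q4_K_M.gguf", n_ctx=4096)

# Simple chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
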
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmartToxic-7B-GGUF/resolve/main/SmartToxic-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and to request quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|