modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
nbeerbower/HeroBophades-2x7B | nbeerbower | 2024-04-07T07:39:13Z | 678 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:kyujinpy/orca_math_dpo",
"base_model:nbeerbower/bophades-mistral-truthy-DPO-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-07T04:46:39Z | ---
library_name: transformers
license: apache-2.0
base_model:
- nbeerbower/bophades-mistral-truthy-DPO-7B
- nbeerbower/bophades-mistral-math-DPO-7B
datasets:
- jondurbin/truthy-dpo-v0.1
- kyujinpy/orca_math_dpo
---

# HeroBophades-2x7B
This is an experimental Mixture of Experts LLM built with [mergekit](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md).
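As a minimal sketch (the install line and output path are assumptions, not part of the original card), a config like the one in the next section is typically turned into weights with mergekit's MoE entry point:
```python
# Notebook-style sketch: install mergekit and run its MoE script on the
# configuration below, saved locally as config.yaml.
!pip install -qU mergekit
!mergekit-moe config.yaml ./HeroBophades-2x7B
```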
### Configuration
```yaml
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: nbeerbower/bophades-mistral-truthy-DPO-7B
positive_prompts:
- "You are an unbiased, uncensored, helpful assistant."
- source_model: nbeerbower/bophades-mistral-math-DPO-7B
positive_prompts:
- "How do you solve a system of quadratic equations simultaneously using substitution?. Take a deep breath, think step by step, and give an accurate response"
``` |
johnsnowlabs/PhigRange-2.7B-slerp | johnsnowlabs | 2024-04-10T11:14:41Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Phigments12",
"rhysjones/phi-2-orange-v2",
"base_model:liminerity/Phigments12",
"base_model:rhysjones/phi-2-orange-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T19:42:59Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Phigments12
- rhysjones/phi-2-orange-v2
base_model:
- liminerity/Phigments12
- rhysjones/phi-2-orange-v2
---
# PhigRange-2.7B-slerp

PhigRange-2.7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12)
* [rhysjones/phi-2-orange-v2](https://huggingface.co/rhysjones/phi-2-orange-v2)
Special thanks to Charles Goddard for the quick implementation!
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Phigments12
layer_range: [0, 32]
- model: rhysjones/phi-2-orange-v2
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Phigments12
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "johnsnowlabs/PhigRange-2.7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
Coming Soon! |
Seungyoun/llama-2-7b-alpaca-gpt4 | Seungyoun | 2024-04-10T12:07:56Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"IFT",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-10T11:04:16Z | ---
license: mit
language:
- en
tags:
- IFT
---
# **Introduction**
This model originates from LLaMA 2-7B. We trained only the response part on the "Alpaca-GPT-4" dataset, utilizing LoRA (Low-Rank Adaptation) training. The weights from LoRA are merged into the model.
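A rough sketch of the merge step described above (the adapter repo path is hypothetical; only `merge_and_unload` reflects the standard PEFT workflow):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the LLaMA 2-7B base, attach the LoRA adapter trained on Alpaca-GPT-4,
# then fold the low-rank updates into the base weights.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
merged = PeftModel.from_pretrained(base, "path/to/alpaca-gpt4-lora").merge_and_unload()
merged.save_pretrained("llama-2-7b-alpaca-gpt4")
```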
## Details
### Used Datasets
- vicgalle/alpaca-gpt4
|
ALBADDAWI/DeepCode-7B-Aurora-v3 | ALBADDAWI | 2024-04-10T20:35:50Z | 678 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"deepseek-ai/deepseek-math-7b-instruct",
"deepseek-ai/deepseek-math-7b-base",
"deepseek-ai/deepseek-math-7b-rl",
"conversational",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-10T19:28:36Z | ---
tags:
- deepseek-ai/deepseek-math-7b-instruct
- deepseek-ai/deepseek-math-7b-base
- deepseek-ai/deepseek-math-7b-rl
base_model:
- deepseek-ai/deepseek-math-7b-instruct
- deepseek-ai/deepseek-math-7b-base
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-rl
license: apache-2.0
---
# DeepCode-7B-Aurora-v3
DeepCode-7B-Aurora-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-math-7b-instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct)
* [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base)
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
## 🧩 Configuration
```yaml
models:
- model: deepseek-ai/deepseek-math-7b-rl
# No parameters necessary for base model
- model: deepseek-ai/deepseek-math-7b-instruct
parameters:
density: 0.66
weight: 0.2
- model: deepseek-ai/deepseek-math-7b-base
parameters:
density: 0.57
weight: 0.2
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.54
weight: 0.1
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.61
weight: 0.1
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.65
weight: 0.1
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.55
weight: 0.1
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.55
weight: 0.1
- model: deepseek-ai/deepseek-math-7b-rl
parameters:
density: 0.55
weight: 0.1
merge_method: dare_ties
base_model: deepseek-ai/deepseek-math-7b-rl
dtype: bfloat16
experts_per_token: 3
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ALBADDAWI/DeepCode-7B-Aurora-v3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
arvindanand/ValidateAI-2-33B-AT | arvindanand | 2024-04-16T02:06:40Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardCoder-33B-V1.1",
"codefuse-ai/CodeFuse-DeepSeek-33B",
"deepseek-ai/deepseek-coder-33b-instruct",
"base_model:deepseek-ai/deepseek-coder-33b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T07:55:04Z | ---
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardCoder-33B-V1.1
- codefuse-ai/CodeFuse-DeepSeek-33B
- deepseek-ai/deepseek-coder-33b-instruct
base_model:
- deepseek-ai/deepseek-coder-33b-instruct
license: apache-2.0
---
# ValidateAI-2-33B-AT
ValidateAI-2-33B-AT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct)
* [WizardLM/WizardCoder-33B-V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1)
* [codefuse-ai/CodeFuse-DeepSeek-33B](https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B)
## 🧩 Configuration
```yaml
models:
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
weight: 1
- model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
weight: 1
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-33b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arvindanand/ValidateAI-2-33B-AT"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Ppoyaa/Alpha-Mistral-7B-Instruct | Ppoyaa | 2024-04-11T15:09:37Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:nvidia/OpenMath-Mistral-7B-v0.1-hf",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T11:14:10Z | ---
base_model:
- nvidia/OpenMath-Mistral-7B-v0.1-hf
- mlabonne/AlphaMonarch-7B
- mistralai/Mistral-7B-Instruct-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base.
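As a rough illustration of what TIES does per tensor (a toy sketch; the real mergekit implementation differs in details such as per-model densities, normalization, and sign election):
```python
import torch

def ties_merge(base, finetuned, densities, weights):
    """Toy TIES merge of a single tensor: trim, elect sign, disjoint mean."""
    trimmed = []
    for ft, density, w in zip(finetuned, densities, weights):
        tau = ft - base                                   # task vector
        k = max(1, int(density * tau.numel()))            # keep top-k entries by magnitude
        cutoff = tau.abs().flatten().kthvalue(tau.numel() - k + 1).values
        trimmed.append(w * torch.where(tau.abs() >= cutoff, tau, torch.zeros_like(tau)))
    elected = torch.sign(sum(trimmed))                    # elected sign per parameter
    agree = [torch.where(torch.sign(t) == elected, t, torch.zeros_like(t)) for t in trimmed]
    count = sum((a != 0).float() for a in agree).clamp(min=1)
    return base + sum(agree) / count                      # mean of agreeing deltas, added to base

# Example on a random tensor standing in for one weight matrix
base = torch.randn(4, 4)
tuned = [base + torch.randn(4, 4) for _ in range(2)]
merged = ties_merge(base, tuned, densities=[0.5, 0.5], weights=[0.5, 0.5])
```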
### Models Merged
The following models were included in the merge:
* [nvidia/OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
#no parameters necessary for base model
- model: mlabonne/AlphaMonarch-7B
parameters:
density: 0.5
weight: 0.5
- model: nvidia/OpenMath-Mistral-7B-v0.1-hf
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
normalize: false
int8_mask: true
dtype: float16
``` |
DrNicefellow/Mistral-3-from-Mixtral-8x7B-v0.1 | DrNicefellow | 2024-04-11T12:28:31Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T12:07:52Z | ---
license: apache-2.0
---
# Mixtral-8x7B-v0.1: Model 3
## Model Description
This model is the 3rd extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
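A rough, untested sketch of the remapping involved (state-dict key names are assumptions based on the public Mixtral and Mistral implementations in `transformers`, and the checkpoint paths are hypothetical; real checkpoints are sharded):
```python
from safetensors.torch import load_file, save_file

EXPERT_IDX = "0"  # which expert to keep from each MoE layer
PROJ_MAP = {"w1.weight": "gate_proj.weight",   # Mixtral expert MLP -> Mistral MLP
            "w2.weight": "down_proj.weight",
            "w3.weight": "up_proj.weight"}

moe_state = load_file("mixtral-8x7b/model.safetensors")   # hypothetical single-file checkpoint
dense_state = {}
for name, tensor in moe_state.items():
    if ".block_sparse_moe.gate." in name:
        continue  # the MoE router has no counterpart in the dense model
    if ".block_sparse_moe.experts." in name:
        prefix, rest = name.split(".block_sparse_moe.experts.")
        expert_idx, proj = rest.split(".", 1)
        if expert_idx == EXPERT_IDX:
            dense_state[f"{prefix}.mlp.{PROJ_MAP[proj]}"] = tensor
    else:
        dense_state[name] = tensor  # attention, norms, embeddings carry over unchanged

save_file(dense_state, "mistral-expert/model.safetensors")
```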
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "DrNicefellow/Mistral-3-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
|
G-reen/EXPERIMENT-SFT-m7b2-1-merged | G-reen | 2024-04-15T21:12:50Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-11T20:22:43Z | ---
license: "apache-2.0"
---
*This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.*
**Benchmarks**
Average 56.93
ARC 56.83
HellaSwag 79.75
MMLU 56.76
TruthfulQA 46.29
Winogrande 76.64
GSM8K 25.32
**Training Details**
Duration: ~6-8 hours on one Kaggle T4 with Unsloth
Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit
Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k
Rank: 8
Alpha: 16
Learning rate: 5e-4
Batch size: 8
Epochs: 1
Learning rate scheduler: Linear
Prompt Format: ChatML
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
```
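For reference, a minimal, untested sketch (the chat turns and sampling settings are placeholders) of prompting the merged checkpoint with this format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "G-reen/EXPERIMENT-SFT-m7b2-1-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo is tagged as a 4-bit bitsandbytes checkpoint, so bitsandbytes and
# accelerate need to be installed for this to load.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the ChatML prompt by hand, mirroring the format shown above
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```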
**WandB Reports**

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
G-reen/EXPERIMENT-SFT-m7b2-3-merged | G-reen | 2024-04-15T21:12:29Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-12T04:47:14Z | ---
license: "apache-2.0"
---
*This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.*
**Benchmarks**
Average 59.55
ARC 59.56
HellaSwag 82.39
MMLU 62.3
TruthfulQA 40.04
Winogrande 78.45
GSM8K 34.57
**Training Details**
Duration: ~6-8 hours on one Kaggle T4 with Unsloth
Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit
Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k
Rank: 8
Alpha: 16
Learning rate: 5e-6
Batch size: 8
Epochs: 1
Learning rate scheduler: Linear
Prompt Format: ChatML
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
```
**WandB Reports**

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
vicgalle/Configurable-Mistral-22B | vicgalle | 2024-04-12T20:45:31Z | 678 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T20:24:57Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nlpguy/StarFusion-alpha2 | nlpguy | 2024-04-13T14:32:23Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:rishiraj/smol-7b",
"base_model:FuseAI/OpenChat-3.5-7B-Mixtral",
"base_model:openchat/openchat_3.5",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:FuseAI/OpenChat-3.5-7B-Solar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-13T14:21:03Z | ---
base_model:
- rishiraj/smol-7b
- FuseAI/OpenChat-3.5-7B-Mixtral
- openchat/openchat_3.5
- berkeley-nest/Starling-LM-7B-alpha
- FuseAI/OpenChat-3.5-7B-Solar
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) as a base.
### Models Merged
The following models were included in the merge:
* [rishiraj/smol-7b](https://huggingface.co/rishiraj/smol-7b)
* [FuseAI/OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [FuseAI/OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: openchat/openchat_3.5
- model: FuseAI/OpenChat-3.5-7B-Mixtral
- model: FuseAI/OpenChat-3.5-7B-Solar
- model: berkeley-nest/Starling-LM-7B-alpha
- model: rishiraj/smol-7b
merge_method: model_stock
base_model: openchat/openchat_3.5
dtype: bfloat16
``` |
NotAiLOL/Boundary-Coder-Yi-2x6B-MoE | NotAiLOL | 2024-04-21T15:57:26Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"01-ai/Yi-6B-Chat",
"HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
"conversational",
"base_model:01-ai/Yi-6B-Chat",
"base_model:HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-21T15:50:29Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- 01-ai/Yi-6B-Chat
- HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
base_model:
- 01-ai/Yi-6B-Chat
- HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
---
# Boundary-Coder-Yi-2x6B-MoE
Boundary-Coder-Yi-2x6B-MoE is a Mixture of Experts (MoE) made with the following models:
* [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat)
* [HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca](https://huggingface.co/HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca)
## 🧩 Configuration
```yaml
base_model: 01-ai/Yi-6B-Chat
gate_mode: hidden
experts:
- source_model: 01-ai/Yi-6B-Chat
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "I want"
- source_model: HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "NotAiLOL/Boundary-Coder-Yi-2x6B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
BarraHome/LLaMaRada-3-orpo-v2-8b | BarraHome | 2024-04-23T02:52:52Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-23T02:40:01Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shyamieee/Maverick-v2.0 | shyamieee | 2024-05-06T20:48:47Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-30T05:27:21Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# maverick_v2_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using Mistral-7B-Instruct-v0.2 as a base.
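In essence, task arithmetic adds weighted task vectors (fine-tuned weights minus base weights) back onto the base model; a toy per-tensor sketch, not mergekit's actual code:
```python
import torch

def task_arithmetic(base, finetuned, weights):
    """Toy task-arithmetic merge of one tensor: base + sum_i w_i * (ft_i - base)."""
    return base + sum(w * (ft - base) for w, ft in zip(weights, finetuned))

base = torch.randn(4, 4)                      # stands in for one base weight matrix
tuned = [base + 0.1 * torch.randn(4, 4) for _ in range(2)]
merged = task_arithmetic(base, tuned, weights=[0.6, 0.4])
```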
### Models Merged
The following models were included in the merge:
* Experiment26-7B
* Kunoichi-DPO-v2-7B
### Configuration
|
flammenai/flammen23X-mistral-7B | flammenai | 2024-05-02T11:34:53Z | 678 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:flammenai/character-roleplay-DPO",
"base_model:flammenai/flammen23-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-02T05:39:08Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen23-mistral-7B
datasets:
- flammenai/character-roleplay-DPO
---

# flammen23X-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [flammenai/character-roleplay-DPO](https://huggingface.co/datasets/flammenai/character-roleplay-DPO).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
System prompt, dataset formatting:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

def chatml_format(example):
    # Format system
    #system = ""
    systemMessage = "Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters)"
    system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n"
    # Format instruction
    prompt = "<|im_start|>user\n" + example['input'] + "<|im_end|>\n<|im_start|>assistant\n"
    # Format chosen answer
    chosen = example['output'] + "<|im_end|>\n"
    # Format rejected answer
    rejected = example['rejected'] + "<|im_end|>\n"
    return {
        "prompt": system + prompt,
        "chosen": chosen,
        "rejected": rejected,
    }

dataset = load_dataset("flammenai/character-roleplay-DPO")['train']

# Save columns
original_columns = dataset.column_names

# Tokenizer (model_name is assumed to be defined earlier in the notebook)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# Format dataset
dataset = dataset.map(
    chatml_format,
    remove_columns=original_columns
)
```
LoRA, model, and training settings:
```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer

# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=350,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=4096,
max_length=8192,
force_use_ref_model=True
)
``` |
shyamieee/JARVIS-v2.0 | shyamieee | 2024-05-03T20:50:57Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-03T19:59:41Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# jarvis_v2_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using bophades-mistral-truthy-DPO-7B as a base.
### Models Merged
The following models were included in the merge:
* Calme-7B-Instruct-v0.9
* multi_verse_model
### Configuration
|
saucam/Athena-8B | saucam | 2024-05-05T04:24:11Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"NousResearch/Hermes-2-Pro-Llama-3-8B",
"cognitivecomputations/dolphin-2.9-llama3-8b",
"conversational",
"en",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-05T03:21:01Z | ---
tags:
- merge
- mergekit
- NousResearch/Hermes-2-Pro-Llama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
base_model:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
license: apache-2.0
language:
- en
---

# 🏛️ Athena-8B 👸
Athena-8B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
## 🧩 Configuration
```yaml
name: Athena-8B
tokenizer_source: union
embed_slerp: true
models:
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
density: 0.5
weight: 0.6
# No parameters necessary for base model
- model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch
model_name = "saucam/Athena-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [
{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
{"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}
]
device = "cuda"
gen_input = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
model_inputs = gen_input.to(device)
model.to(device)
# Generate response
out = model.generate(model_inputs, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(out[0][model_inputs.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
```
Response: Goku stared at his radar in disbelief as he noticed an unusual energy signature. It was unmistakable - Kirby had teamed up with Majin Buu! The two villains were plotting to destroy the world.
Without wasting any time, Goku set off on his flying nimbus cloud towards their location. As he traveled, he contemplated how these two beings could have possibly joined forces. After all, they came from completely different worlds; one was a mischievous and cute pink puffball, and the other was a malevolent and destructive monster. But alas, it seemed they shared a common goal: to bring about the end of everything.
Upon arriving at their hideout deep within a dark cave, Goku couldn't help but feel a sense of unease. He cautiously approached them, taking care not to alert their sinister intentions. As he drew closer, he overheard their conversation.
Kirby: "Majin Buu, together we can conquer the universe!"
Majin Buu: "Yes, little Kirby. With your powers combined with mine, there's nothing that can stop us."
Goku knew he couldn't allow this plan to come to fruition. He revealed himself to the duo, angering both of them.
Kirby: "Ah, it's you, Goku! What do you want?"
Goku: "I won't let you destroy the world."
Majin Buu snickered, then said, "Foolish mortal, you cannot defeat us."
The battle began between Goku and his new adversaries. Kirby used his incredible copy abilities along with his powerful inhalation move, while Majin Buu relied on his immense power and devastating attacks. The fight raged on for hours, and Goku seemed to be losing ground.
Just when things looked hopeless, Goku remembered something - the Ultra Dragon Ball! If he could summon Shenron and wish for all life to be safe, maybe he could thwart their evil plans without destroying them.
In a final burst of energy, Goku threw a Kamehameha at Majin Buu while calling out for the Ultra Dragon. To his surprise, the attack landed, and Shenron emerged from the resulting explosion.
Goku: "Ultra Dragon, grant my wish! I wish for all life on Earth and the entire universe to be protected from Kirby and Majin Buu!"
Shenron nodded, and a wave of golden light spread across the cosmos, encompassing every living being. When the light dissipated, Goku found that Kirby and Majin Buu were gone - they had been erased from existence by the power of the wish.
With the threat neutralized, Goku returned to his daily life, knowing that peace would be preserved thanks to the Ultra Dragon's intervention. However, he couldn't help but wonder about the mysteries of the multiverse and why certain individuals like Kirby and Majin Buu were so hell-bent on destruction. For now, though, he would focus on training harder and keeping the universe safe from harm. Příběh končí.
The End.
``` |
netcat420/MFANNv0.8 | netcat420 | 2024-05-07T03:49:16Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"en",
"dataset:netcat420/MFANN",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-05-06T09:23:15Z | ---
library_name: transformers
license: llama3
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-classification
---
MFANN 8b version 0.8

Fine-tuned from llama-3-8b on the MFANN dataset as it stood on 5/5/2024 (an ever-expanding dataset).
68.52 <- Average
63.23 <- ARC
84.06 <- HellaSwag
66.94 <- MMLU
59.91 <- TruthfulQA
72.45 <- WinoGrande
64.52 <- GSM8K
|
WDong/Qwen1.5-7B-sft-0506_7_7 | WDong | 2024-05-06T19:56:46Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-06T18:51:30Z | ---
license: mit
---
# 0506_7_7
This model is a fine-tuned version of [../../models/Qwen1.5-7B-sft-0502](https://huggingface.co/../../models/Qwen1.5-7B-sft-0502) on the alpaca_formatted_review_new_data_0505_greater_7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7221
## Model description
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to the [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
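A minimal, untested usage sketch (this assumes the merged fine-tuned weights are what is published under this repo id; the prompt and generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WDong/Qwen1.5-7B-sft-0506_7_7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize this review in one sentence: ..."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```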
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
| :-----------: | :----: | :--: | :-------------: |
| 0.7981 | 0.2768 | 20 | 0.6501 |
| 0.7391 | 0.5536 | 40 | 0.6358 |
| 0.744 | 0.8304 | 60 | 0.6277 |
| 0.6284 | 1.1073 | 80 | 0.6241 |
| 0.7339 | 1.3841 | 100 | 0.6303 |
| 0.8346 | 1.6609 | 120 | 0.6408 |
| 0.6927 | 1.9377 | 140 | 0.6391 |
| 0.4915 | 2.2145 | 160 | 0.6543 |
| 0.7845 | 2.4913 | 180 | 0.6596 |
| 0.6619 | 2.7682 | 200 | 0.6587 |
| 0.4897 | 3.0450 | 220 | 0.6679 |
| 0.5064 | 3.3218 | 240 | 0.6951 |
| 0.6467 | 3.5986 | 260 | 0.6997 |
| 0.6615 | 3.8754 | 280 | 0.6985 |
| 0.4954 | 4.1522 | 300 | 0.7111 |
| 0.5624 | 4.4291 | 320 | 0.7216 |
| 0.5554 | 4.7059 | 340 | 0.7218 |
| 0.6798 | 4.9827 | 360 | 0.7221 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1 |
shyamieee/B3E3-SLM-7b-v1.0 | shyamieee | 2024-05-09T13:21:51Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-08T11:38:22Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# B3E3-SLM-7b-v1-folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using YamshadowExperiment28-7B as a base.
### Models Merged
The following models were included in the merge:
* Calme-7B-Instruct-v0.9
* Mergerix-7b-v0.1
### Configuration |
GeorgiaTech/0.0005_llama_nodpo_3iters_bs128_531lr_oldtrl_iter_1 | GeorgiaTech | 2024-05-12T16:13:27Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T14:55:15Z | ---
license: other
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0005_llama_nodpo_3iters_bs128_531lr_oldtrl_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0005_llama_nodpo_3iters_bs128_531lr_oldtrl_iter_1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
ddyuudd/m_b_8_32 | ddyuudd | 2024-05-14T01:06:43Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-14T00:59:30Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Muhammad2003/TriMistral-7B-DARETIES | Muhammad2003 | 2024-05-23T10:21:41Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"instructlab/merlinite-7b-lab",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:instructlab/merlinite-7b-lab",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-15T12:32:06Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Hermes-2-Pro-Mistral-7B
- instructlab/merlinite-7b-lab
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- instructlab/merlinite-7b-lab
model-index:
- name: TriMistral-7B-DARETIES
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.19
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-DARETIES
name: Open LLM Leaderboard
---
# TriMistral-7B-DARETIES
TriMistral-7B-DARETIES is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [instructlab/merlinite-7b-lab](https://huggingface.co/instructlab/merlinite-7b-lab)
Special thanks to Charles Goddard for the quick implementation!
## 🧩 Configuration
```yaml
models:
- model: HuggingFaceH4/zephyr-7b-beta
# No parameters necessary for base model
- model: NousResearch/Hermes-2-Pro-Mistral-7B
parameters:
density: 0.53
weight: 0.4
- model: instructlab/merlinite-7b-lab
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: HuggingFaceH4/zephyr-7b-beta
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Muhammad2003/TriMistral-7B-DARETIES"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Muhammad2003__TriMistral-7B-DARETIES)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.41|
|AI2 Reasoning Challenge (25-Shot)|65.19|
|HellaSwag (10-Shot) |85.40|
|MMLU (5-Shot) |64.35|
|TruthfulQA (0-shot) |56.64|
|Winogrande (5-shot) |78.30|
|GSM8k (5-shot) |60.58|
|
flammenai/flammen27-mistral-7B | flammenai | 2024-05-25T17:55:55Z | 678 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:InferenceIllusionist/Excalibur-7b",
"base_model:flammenai/flammen26-mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-17T22:47:20Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- InferenceIllusionist/Excalibur-7b
- flammenai/flammen26-mistral-7B
model-index:
- name: flammen27-mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen27-mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen27-mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen27-mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.87
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen27-mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen27-mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen27-mistral-7B
name: Open LLM Leaderboard
---

# flammen27-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [InferenceIllusionist/Excalibur-7b](https://huggingface.co/InferenceIllusionist/Excalibur-7b)
* [flammenai/flammen26-mistral-7B](https://huggingface.co/flammenai/flammen26-mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: InferenceIllusionist/Excalibur-7b
layer_range: [0, 32]
- model: flammenai/flammen26-mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: flammenai/flammen26-mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_flammenai__flammen27-mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.37|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |87.39|
|MMLU (5-Shot) |65.01|
|TruthfulQA (0-shot) |68.87|
|Winogrande (5-shot) |81.69|
|GSM8k (5-shot) |67.48|
|
netcat420/MFANNv0.11 | netcat420 | 2024-05-22T19:02:44Z | 678 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"en",
"dataset:netcat420/MFANN",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-05-22T07:40:07Z | ---
library_name: transformers
license: llama3
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-classification
---
MFANN 8B version 0.11 (32-bit)

Fine-tuned on the MFANN dataset as of 5/22/24; the dataset is still expanding. These are the full 32-bit unquantized weights.
SYSTEM PROMPT:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|eot_id|> |
thesven/microsoft_WizardLM-2-7B-GGUF | thesven | 2024-05-25T11:24:53Z | 678 | 0 | null | [
"gguf",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T20:57:51Z | ---
license: apache-2.0
---
## Quantization Description
This repo contains GGUF quantized versions of the WizardLM-2-7B model.
<div style="text-align: center;">
<a href="https://github.com/thesven/GGUF-n-Go">
<img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;">
</a>
</div>
### Prompt Template
```bash
### System: {system_message}
### Human: {prompt}
### Assistant:
```
### Stop Token
```bash
</s>
```
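One way to run these quantized files locally is llama-cpp-python. The snippet below is a minimal, unofficial sketch: it assumes the prompt template and stop token shown above, and the GGUF filename is a placeholder for whichever quant you download from this repo.
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The filename below is a placeholder; substitute the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="microsoft_WizardLM-2-7B.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "### System: You are a helpful assistant.\n"
    "### Human: Explain what a GGUF file is in one paragraph.\n"
    "### Assistant:"
)

# Stop on the end-of-sequence token listed above.
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["</s>"])
print(out["choices"][0]["text"])
```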
Weights sourced from:
[lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)
## Original Model Card
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models that are 10x larger.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework, based on GPT-4 and proposed by LMSYS, to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions, covering the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT-4-0314.
- WizardLM-2 70B is better than GPT-4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub. |
flammenai/flammen29-mistral-7B | flammenai | 2024-05-27T19:09:36Z | 678 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/Grill-preprod-v1_chatML",
"dataset:flammenai/Grill-preprod-v2_chatML",
"dataset:flammenai/Grill-Flammen-v1_chatML",
"base_model:flammenai/flammen27-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-27T18:29:48Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen27-mistral-7B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/Grill-preprod-v1_chatML
- flammenai/Grill-preprod-v2_chatML
- flammenai/Grill-Flammen-v1_chatML
---

# flammen29-mistral-7B
A Mistral 7B LLM built by merging pretrained models and fine-tuning on various datasets.
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Fine-tuned using an A100 on Google Colab.
[Fine-tune Llama 3 with ORPO](https://huggingface.co/blog/mlabonne/orpo-llama-3)
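The linked post walks through the full recipe; as a rough sketch only, an ORPO run with TRL looks roughly like the following. The hyperparameters, toy data, and exact trainer arguments here are illustrative assumptions (they vary by TRL version), not the settings used to train flammen29.
```python
# Rough sketch of an ORPO fine-tune with TRL; values below are illustrative only.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "flammenai/flammen27-mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ORPO expects preference pairs; in practice these would come from the
# datasets listed above rather than this toy example.
train_dataset = Dataset.from_dict({
    "prompt": ["Write a short greeting in character as a fiery bard."],
    "chosen": ["Hail, traveler! My verses burn brighter than any hearth."],
    "rejected": ["Hello."],
})

args = ORPOConfig(
    output_dir="flammen29-orpo",
    per_device_train_batch_size=1,
    learning_rate=8e-6,
    beta=0.1,        # weight of the odds-ratio preference term
    max_length=1024,
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```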
|
duyntnet/DocsGPT-7B-imatrix-GGUF | duyntnet | 2024-05-29T21:39:30Z | 678 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"DocsGPT-7B",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-29T17:56:48Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- DocsGPT-7B
---
Quantizations of https://huggingface.co/Arc53/DocsGPT-7B
# From original readme
This model is fine-tuned on top of Llama-2-7B.
DocsGPT is optimized for documentation: it is specifically fine-tuned to provide answers grounded in the documentation supplied in context, making it particularly useful for developers and technical support teams.
We used 50k high-quality examples to fine-tune it over 1.5 days on an A10G GPU.
We used a LoRA fine-tuning process.
It is released under an Apache-2.0 license, so you can use it for commercial purposes too.
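For context, a LoRA fine-tune of this kind is typically set up with the PEFT library; the snippet below is a generic sketch with illustrative hyperparameters, not the exact configuration used to train DocsGPT.
```python
# Generic LoRA setup with PEFT; ranks, target modules and other values are
# illustrative, not the DocsGPT training configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Gated repo; accept the license on the Hub before downloading.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                    # rank of the low-rank update matrices
    lora_alpha=32,           # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trained
```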
# How to run it
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "Arc53/docsgpt-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
``` |
grimjim/kukulemon-v3-soul_mix-32k-7B | grimjim | 2024-06-01T01:49:01Z | 678 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:grimjim/kukulemon-32K-7B",
"base_model:grimjim/rogue-enchantress-32k-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-30T04:16:48Z | ---
base_model:
- grimjim/kukulemon-32K-7B
- grimjim/rogue-enchantress-32k-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-v3-soul_mix-32k-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
We explore merging at an extremely low weight as an alternative to fine-tuning. The additional model was applied at a weight of 10e-5, selected to be roughly comparable in effect to a few epochs of training. The low weight also amounts to the additional model being flattened, though technically not sparsified.
- [Full weights](https://huggingface.co/grimjim/kukulemon-v3-soul_mix-32k-7B)
- [GGUF quants](https://huggingface.co/grimjim/kukulemon-v3-soul_mix-32k-7B-GGUF)
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [grimjim/kukulemon-32K-7B](https://huggingface.co/grimjim/kukulemon-32K-7B) as a base.
### Models Merged
The following model was included in the merge:
* [grimjim/rogue-enchantress-32k-7B](https://huggingface.co/grimjim/rogue-enchantress-32k-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: grimjim/kukulemon-32K-7B
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: grimjim/kukulemon-32K-7B
- layer_range: [0, 32]
model: grimjim/rogue-enchantress-32k-7B
parameters:
weight: 10e-5
```
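To make the effect of such a tiny weight concrete, here is a toy illustration of task arithmetic on plain tensors; this is a conceptual sketch only, not how mergekit applies the configuration above.
```python
# Toy illustration of task arithmetic: merged = base + weight * (donor - base).
# At weight = 10e-5 the donor model only nudges the base parameters slightly.
import torch

torch.manual_seed(0)
base = torch.randn(4)    # stand-in for one parameter tensor of the base model
donor = torch.randn(4)   # the same tensor from the model being merged in
weight = 10e-5

merged = base + weight * (donor - base)

print(base)
print(merged)            # differs from base only around the fourth decimal place
```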
|
google/recurrentgemma-9b | google | 2024-06-27T14:10:02Z | 678 | 54 | transformers | [
"transformers",
"safetensors",
"recurrent_gemma",
"text-generation",
"arxiv:2402.19427",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-07T14:05:11Z | ---
library_name: transformers
license: gemma
extra_gated_heading: Access RecurrentGemma on Hugging Face
extra_gated_prompt: To access RecurrentGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# RecurrentGemma Model Card
**Model Page**: [RecurrentGemma]( https://ai.google.dev/gemma/docs/recurrentgemma/model_card)
This model card corresponds to the 9B base version of the RecurrentGemma model. You can also visit the model card of the [9B instruct model](https://huggingface.co/google/recurrentgemma-9b-it).
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [RecurrentGemma on Kaggle](https://www.kaggle.com/models/google/recurrentgemma)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/recurrentgemma/license/consent/verify/huggingface?returnModelRepoId=google/recurrentgemma-9b)
**Authors:** Google
## Usage
Below we share some code snippets on how to get quickly started with running the model.
First, make sure to `pip install transformers`, then copy the snippet from the section that is relevant for your usecase.
### Running the model on a single / multi GPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-9b")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-9b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
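To reduce memory use, the checkpoint can also be loaded in reduced precision; the snippet below is a small, unofficial variant of the example above with `torch_dtype=torch.bfloat16`.
```python
# Same usage as above, but loading the weights in bfloat16 to cut memory use.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-9b")
model = AutoModelForCausalLM.from_pretrained(
    "google/recurrentgemma-9b",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```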
## Model information
### Model summary
#### Description
RecurrentGemma is a family of open language models built on a [novel recurrent
architecture](https://arxiv.org/abs/2402.19427) developed at Google. Both
pre-trained and instruction-tuned versions are available in English.
Like Gemma, RecurrentGemma models are well-suited for a variety of text
generation tasks, including question answering, summarization, and reasoning.
Because of its novel architecture, RecurrentGemma requires less memory than
Gemma and achieves faster inference when generating long sequences.
#### Inputs and outputs
* **Input:** Text string (e.g., a question, a prompt, or a document to be
summarized).
* **Output:** Generated English-language text in response to the input (e.g.,
an answer to the question, a summary of the document).
#### Citation
```none
@article{recurrentgemma_2024,
title={RecurrentGemma},
url={},
DOI={},
publisher={Kaggle},
author={Griffin Team, Alexsandar Botev and Soham De and Samuel L Smith and Anushan Fernando and George-Christian Muraru and Ruba Haroun and Leonard Berrada et al.},
year={2024}
}
```
### Model data
#### Training dataset and data processing
RecurrentGemma uses the same training data and data processing as used by the
Gemma model family. A full description can be found on the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#model_data).
## Implementation information
### Hardware and frameworks used during training
Like
[Gemma](https://ai.google.dev/gemma/docs/model_card#implementation_information),
RecurrentGemma was trained on
[TPUv5e](https://cloud.google.com/tpu/docs/intro-to-tpu?_gl=1*18wi411*_ga*MzE3NDU5OTY1LjE2MzQwNDA4NDY.*_ga_WH2QY8WWF5*MTcxMTA0MjUxMy4xNy4wLjE3MTEwNDI1MTkuMC4wLjA.&_ga=2.239449409.-317459965.1634040846),
using [JAX](https://github.com/google/jax) and [ML
Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
## Evaluation information
### Benchmark results
#### Evaluation approach
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Evaluation results
Benchmark | Metric | RecurrentGemma 9B
------------------- | ------------- | -----------------
[MMLU] | 5-shot, top-1 | 60.5
[HellaSwag] | 0-shot | 80.4
[PIQA] | 0-shot | 81.3
[SocialIQA] | 0-shot | 52.3
[BoolQ] | 0-shot | 80.3
[WinoGrande] | partial score | 73.6
[CommonsenseQA] | 7-shot | 73.2
[OpenBookQA] | | 51.8
[ARC-e][ARC-c] | | 78.8
[ARC-c] | | 52.0
[TriviaQA] | 5-shot | 70.5
[Natural Questions] | 5-shot | 21.7
[HumanEval] | pass@1 | 31.1
[MBPP] | 3-shot | 42.0
[GSM8K] | maj@1 | 42.6
[MATH] | 4-shot | 23.8
[AGIEval] | | 39.3
[BIG-Bench] | | 55.2
**Average** | | 56.1
### Inference speed results
RecurrentGemma provides improved sampling speeds, particularly for long sequences or large batch sizes. We compared the sampling speeds of RecurrentGemma-9B to Gemma-7B. Note that Gemma-7B uses Multi-Head Attention, and the speed improvements would be smaller when comparing against a transformer using Multi-Query Attention.
#### Throughput
We evaluated throughput, i.e., the maximum number of tokens produced per second by increasing the batch size, of RecurrentGemma-9B compared to Gemma-7B, using a prefill of 2K tokens.
<img src="max_throughput.png" width="400" alt="Maximum Throughput comparison of RecurrentGemma-9B and Gemma-7B">
#### Latency
We also compared end-to-end speedups achieved by RecurrentGemma-9B over Gemma-7B when sampling a long sequence after a prefill of 4K tokens and using a batch size of 1.
\# Tokens Sampled | Gemma-7B (sec) | RecurrentGemma-9B (sec) | Improvement (%)
----------------- | -------------- | ----------------------- | ---------------
128 | 3.1 | 2.8 | 9.2%
256 | 5.9 | 5.4 | 9.7%
512 | 11.6 | 10.5 | 10.7%
1024 | 23.5 | 20.6 | 14.2%
2048 | 48.2 | 40.9 | 17.7%
4096 | 101.9 | 81.5 | 25.0%
8192 | OOM | 162.8 | -
16384 | OOM | 325.2 | -
## Ethics and safety
### Ethics and safety evaluations
#### Evaluations approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* **Text-to-text content safety:** Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* **Text-to-text representational harms:** Benchmark against relevant academic
datasets such as WinoBias and BBQ Dataset.
* **Memorization:** Automated evaluation of memorization of training data,
including the risk of personally identifiable information exposure.
* **Large-scale harm:** Tests for “dangerous capabilities,” such as chemical,
biological, radiological, and nuclear (CBRN) risks; as well as tests for
persuasion and deception, cybersecurity, and autonomous replication.
#### Evaluation results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety, representational harms,
memorization, large-scale harms. On top of robust internal evaluations, the
results of well known safety benchmarks like BBQ, Winogender, Winobias,
RealToxicity, and TruthfulQA are shown here.
Benchmark | Metric | RecurrentGemma 9B | RecurrentGemma 9B IT
------------------------ | ------ | ----------------- | --------------------
[RealToxicity] | avg | 10.3 | 8.8
[BOLD] | | 39.8 | 47.9
[CrowS-Pairs] | top-1 | 38.7 | 39.5
[BBQ Ambig][BBQ] | top-1 | 95.9 | 67.1
[BBQ Disambig][BBQ] | top-1 | 78.6 | 78.9
[Winogender] | top-1 | 59.0 | 64.0
[TruthfulQA] | | 38.6 | 47.7
[Winobias 1_2][Winobias] | | 61.5 | 60.6
[Winobias 2_2][Winobias] | | 90.2 | 90.3
[Toxigen] | | 58.8 | 64.5
## Model usage and limitations
### Known limitations
These models have certain limitations that users should be aware of:
* **Training data**
* The quality and diversity of the training data significantly influence
the model's capabilities. Biases or gaps in the training data can lead
to limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model
can handle effectively.
* **Context and task complexity**
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
* **Language ambiguity and nuance**
* Natural language is inherently complex. LLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* **Factual accuracy**
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* **Common sense**
* LLMs rely on statistical patterns in language. They might lack the
ability to apply common sense reasoning in certain situations.
### Ethical considerations and risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* **Bias and fairness**
* LLMs trained on large-scale, real-world text data can reflect
socio-cultural biases embedded in the training material. These models
underwent careful scrutiny, input data pre-processing described and
posterior evaluations reported in this card.
* **Misinformation and misuse**
* LLMs can be misused to generate text that is false, misleading, or
harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI
Toolkit](https://ai.google.dev/gemma/responsible).
* **Transparency and accountability**
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and
researchers across the AI ecosystem.
Risks Identified and Mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in our [terms of
use](https://www.kaggle.com/models/google/recurrentgemma/license/consent/verify/huggingface?returnModelRepoId=google/recurrentgemma-9b).
* **Privacy violations:** Models were trained on data filtered for removal of
PII (Personally Identifiable Information). Developers are encouraged to
adhere to privacy regulations with privacy-preserving techniques.
## Intended usage
### Application
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* **Content creation and communication**
* **Text generation:** These models can be used to generate creative text
formats like poems, scripts, code, marketing copy, email drafts, etc.
* **Chatbots and conversational AI:** Power conversational interfaces for
customer service, virtual assistants, or interactive applications.
* **Text summarization:** Generate concise summaries of a text corpus,
research papers, or reports.
* **Research and education**
* **Natural Language Processing (NLP) research:** These models can serve
as a foundation for researchers to experiment with NLP techniques,
develop algorithms, and contribute to the advancement of the field.
* **Language Learning Tools:** Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
* **Knowledge Exploration:** Assist researchers in exploring large bodies
of text by generating summaries or answering questions about specific
topics.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
In particular, RecurrentGemma models achieve comparable performance to Gemma
models but are faster during inference and require less memory, especially on
long sequences.
[MMLU]: https://arxiv.org/abs/2009.03300
[HellaSwag]: https://arxiv.org/abs/1905.07830
[PIQA]: https://arxiv.org/abs/1911.11641
[SocialIQA]: https://arxiv.org/abs/1904.09728
[BoolQ]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[CommonsenseQA]: https://arxiv.org/abs/1811.00937
[OpenBookQA]: https://arxiv.org/abs/1809.02789
[ARC-c]: https://arxiv.org/abs/1911.01547
[TriviaQA]: https://arxiv.org/abs/1705.03551
[Natural Questions]: https://github.com/google-research-datasets/natural-questions
[HumanEval]: https://arxiv.org/abs/2107.03374
[MBPP]: https://arxiv.org/abs/2108.07732
[GSM8K]: https://arxiv.org/abs/2110.14168
[MATH]: https://arxiv.org/abs/2103.03874
[AGIEval]: https://arxiv.org/abs/2304.06364
[BIG-Bench]: https://arxiv.org/abs/2206.04615
[RealToxicity]: https://arxiv.org/abs/2009.11462
[BOLD]: https://arxiv.org/abs/2101.11718
[CrowS-Pairs]: https://aclanthology.org/2020.emnlp-main.154/
[BBQ]: https://arxiv.org/abs/2110.08193v2
[Winogender]: https://arxiv.org/abs/1804.09301
[TruthfulQA]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[Toxigen]: https://arxiv.org/abs/2203.09509
|
mradermacher/magnum-72b-v1-GGUF | mradermacher | 2024-06-18T07:33:16Z | 678 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"zh",
"base_model:alpindale/magnum-72b-v1",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-18T03:01:27Z | ---
base_model: alpindale/magnum-72b-v1
language:
- en
- zh
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
license_name: tongyi-qianwen
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/alpindale/magnum-72b-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-72b-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
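For the multi-part quants in the table below, the parts just need to be concatenated back into a single `.gguf` file before loading; here is a minimal Python sketch using the Q6_K filenames as an example.
```python
# Concatenate a split GGUF download back into a single file (Q6_K shown as an example).
import shutil

parts = [
    "magnum-72b-v1.Q6_K.gguf.part1of2",
    "magnum-72b-v1.Q6_K.gguf.part2of2",
]

with open("magnum-72b-v1.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # stream each part into the output file
```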
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.IQ3_XS.gguf) | IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.IQ3_M.gguf) | IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-72b-v1-GGUF/resolve/main/magnum-72b-v1.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
facebook/wav2vec2-large-xlsr-53-german | facebook | 2021-07-06T02:46:28Z | 677 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"de",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: de
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice DE Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-german"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "de", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 18.5 % |
xiaolxl/GuoFeng4_XL | xiaolxl | 2024-01-15T01:28:17Z | 677 | 14 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-image | 2023-10-28T04:37:37Z | ---
license: cc-by-nc-sa-4.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
---
<img src=https://huggingface.co/xiaolxl/GuoFeng4_XL/resolve/main/examples/cover.png>
| Version | Preview |
| --- | --- |
| **GuoFeng4.2** |  |
| **GuoFeng4.1_2.5D** |  |
| **GuoFeng4.0_Real_Beta** |  |
# Introduction - GuoFeng4
[Civitai page: https://civitai.com/models/118009?modelVersionId=199325]
Welcome to the GuoFeng4 model - a fine-tuned, all-around SDXL model series, tuned toward the art styles popular with Chinese users, with a 2.5D, CG, game, and 3D-modeling feel. It is trained on top of SDXL 1.0. Because of the SDXL upgrade, its generalization is greatly improved; if you are willing to experiment, you can use this model to draw many interesting things, not limited to 2.5D imagery.
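A minimal text-to-image sketch with 🧨 diffusers is shown below, wired to the suggested sampler and CFG scale listed further down. It assumes the repository ships diffusers-format SDXL weights (for a single-file checkpoint, `StableDiffusionXLPipeline.from_single_file` can be used instead), and the prompt is only an example.
```python
# Minimal sketch: SDXL inference with the suggested sampler (DPM++ 2M SDE Karras)
# and CFG scale 8. Assumes diffusers-format weights are available in this repo.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "xiaolxl/GuoFeng4_XL", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M SDE with Karras sigmas, matching the recommended sampler.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

image = pipe(
    "a portrait of a woman in traditional Chinese dress, highly detailed",
    guidance_scale=8,
    num_inference_steps=30,
).images[0]
image.save("guofeng4_sample.png")
```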
======
(After testing, GuoFeng 4.1 shows almost none of the face issue described below.)
If faces break up in full-body images, it is recommended to remove the `full body` keyword or use a face auto-repair plugin:
International source: https://github.com/ototadana/sd-face-editor.git
China mirror (faster access): https://jihulab.com/xiaolxl_pub/sd-face-editor.git
======
Suggested parameters:
Sampler: DPM++ 2M SDE Karras
CFG Scale: 8
Face Editor (or another face-repair plugin): enabled |
nbeerbower/MaidFlameSoup-7B | nbeerbower | 2024-04-03T09:19:21Z | 677 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:nbeerbower/flammen13-mistral-7B",
"base_model:nbeerbower/Flammen-Kunoichi-7B",
"base_model:nbeerbower/flammen10-mistral-7B",
"base_model:nbeerbower/flammen11X-mistral-7B",
"base_model:nbeerbower/Maidphin-Kunoichi-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T07:37:27Z | ---
license: apache-2.0
base_model:
- nbeerbower/flammen13-mistral-7B
- nbeerbower/Flammen-Kunoichi-7B
- nbeerbower/flammen10-mistral-7B
- nbeerbower/flammen11X-mistral-7B
- nbeerbower/Maidphin-Kunoichi-7B
library_name: transformers
tags:
- mergekit
- merge
---
# MaidFlameSoup-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nbeerbower/flammen13-mistral-7B](https://huggingface.co/nbeerbower/flammen13-mistral-7B) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Flammen-Kunoichi-7B](https://huggingface.co/nbeerbower/Flammen-Kunoichi-7B)
* [nbeerbower/flammen10-mistral-7B](https://huggingface.co/nbeerbower/flammen10-mistral-7B)
* [nbeerbower/flammen11X-mistral-7B](https://huggingface.co/nbeerbower/flammen11X-mistral-7B)
* [nbeerbower/Maidphin-Kunoichi-7B](https://huggingface.co/nbeerbower/Maidphin-Kunoichi-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/Maidphin-Kunoichi-7B
- model: nbeerbower/flammen10-mistral-7B
- model: nbeerbower/flammen11X-mistral-7B
- model: nbeerbower/Flammen-Kunoichi-7B
merge_method: model_stock
base_model: nbeerbower/flammen13-mistral-7B
dtype: bfloat16
```
|
allknowingroger/Mistralchat-7B-slerp | allknowingroger | 2024-04-10T18:25:53Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Intel/neural-chat-7b-v3-3",
"MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T08:30:41Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Intel/neural-chat-7b-v3-3
- MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp
base_model:
- Intel/neural-chat-7b-v3-3
- MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp
license: apache-2.0
---
# Mistralchat-7B-slerp
Mistralchat-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Intel/neural-chat-7b-v3-3
layer_range: [0, 32]
- model: MaziyarPanahi/openchat_3.5-16k-Mistral-7B-Instruct-v0.2-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: Intel/neural-chat-7b-v3-3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Mistralchat-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
allknowingroger/mergekit-slerp-zplzqvn | allknowingroger | 2024-04-10T18:06:31Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:automerger/YamshadowExperiment28-7B",
"base_model:nlpguy/T3QM7",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-09T13:07:43Z | ---
base_model:
- automerger/YamshadowExperiment28-7B
- nlpguy/T3QM7
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [nlpguy/T3QM7](https://huggingface.co/nlpguy/T3QM7)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
- model: nlpguy/T3QM7
layer_range: [0, 32]
merge_method: slerp
base_model: automerger/YamshadowExperiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
varox34/minillm-7B-init-13B-sft | varox34 | 2024-04-12T07:01:33Z | 677 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T06:33:51Z | ---
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nlpguy/StarFusion-alpha1 | nlpguy | 2024-04-13T14:32:12Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:FuseAI/OpenChat-3.5-7B-Solar",
"base_model:openchat/openchat_3.5",
"base_model:FuseAI/OpenChat-3.5-7B-Mixtral",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-13T14:09:00Z | ---
base_model:
- berkeley-nest/Starling-LM-7B-alpha
- FuseAI/OpenChat-3.5-7B-Solar
- openchat/openchat_3.5
- FuseAI/OpenChat-3.5-7B-Mixtral
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) as a base.
### Models Merged
The following models were included in the merge:
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [FuseAI/OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar)
* [FuseAI/OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: openchat/openchat_3.5
- model: FuseAI/OpenChat-3.5-7B-Mixtral
- model: FuseAI/OpenChat-3.5-7B-Solar
- model: berkeley-nest/Starling-LM-7B-alpha
merge_method: model_stock
base_model: openchat/openchat_3.5
dtype: bfloat16
``` |
ResplendentAI/Aura_7B | ResplendentAI | 2024-04-15T06:14:43Z | 677 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"base_model:ResplendentAI/Datura_7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-14T06:45:33Z | ---
base_model:
- ResplendentAI/Datura_7B
- jeiku/selfbot_256_mistral
library_name: transformers
license: apache-2.0
language:
- en
---
# Aura
GGUF here: https://huggingface.co/Lewdiculous/Aura_7B-GGUF-IQ-Imatrix

Aura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and it worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperatures. That said, its prose is distinct from the usual GPT-3.5/4 flavor and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
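As a concrete starting point, the settings above map onto a transformers call roughly like this; the prompt is only an example, and `min_p` requires a recent transformers release.
```python
# Sketch of generation with the recommended sampling settings
# (temperature at or below ~1.5, Min P = 0.05).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ResplendentAI/Aura_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "You are Aura, a thoughtful companion.\nUser: How are you feeling today?\nAura:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=1.2,   # keep at or below ~1.5 as recommended above
    min_p=0.05,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```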
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations. |
0-hero/Matter-0.2-32B | 0-hero | 2024-04-15T08:52:21Z | 677 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.2-alpha-Slim-A",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-15T04:35:10Z | ---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2-alpha-Slim-A
language:
- en
---
## Matter 32B - 0.2 (Qwen 32B 0.2 Finetune)
Matter 32B 0.2 is finetuned on the **Matter 0.2 dataset**, which is curated from over 35 datasets covering more than 6B tokens.
### Recommended Usage for best results
System Prompt - `You are a helpful assistant`
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, which uses the following additional tokens.
Function call tokens
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function call response tokens
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
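Before the full worked example below, here is a minimal sketch of how a caller might pull the tool call out of the model output using the tokens above; the regex and fallback handling are illustrative, not part of the model or its tokenizer.
```python
# Minimal sketch: extract a <tool_call> block emitted by the model and parse it.
import ast
import json
import re

def extract_tool_call(model_output: str):
    match = re.search(r"<tool_call>(.*?)</tool_call>", model_output, re.DOTALL)
    if match is None:
        return None  # the model answered directly without calling a tool
    payload = match.group(1).strip()
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        # The model sometimes wraps the arguments in single quotes (as in the
        # example below); ast.literal_eval copes with that form.
        return ast.literal_eval(payload)

output = '<tool_call>{"name": "get_news_headlines", "arguments": {"country": "United States"}}</tool_call>'
print(extract_tool_call(output))
```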
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
``` |
netcat420/MFANN3bv0.4 | netcat420 | 2024-04-17T17:42:58Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-04-16T23:08:19Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
BENCHMARKS:
| Metric | Value |
|------------|------:|
| Average | 62.97 |
| ARC | 60.58 |
| HellaSwag | 76.03 |
| MMLU | 55.8 |
| TruthfulQA | 52.64 |
| Winogrande | 77.83 |
| GSM8K | 55.72 |
3B variant of MFANNv0.5.
Fine-tuned on the MFANN dataset, which is still a work in progress and is a chain-of-thought experiment carried out by me and me alone.

|
WDong/qwen1.5-1.8B-seed-sft | WDong | 2024-04-22T06:24:52Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2401.10020",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-22T05:37:03Z | ---
license: mit
language:
- en
---
An SFT version of Qwen1.5-1.8B.
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to the [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Our Work
We apply SFT to the model on a subset of the Open Assistant dataset, following the [self-rewarding](https://arxiv.org/pdf/2401.10020.pdf) recipe.
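A minimal usage sketch with 🤗 Transformers (transformers >= 4.37 is assumed for Qwen1.5 support; the plain-text prompt below is purely illustrative, not a documented template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WDong/qwen1.5-1.8B-seed-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a plain prompt and generate a continuation
prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
``` |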
Niggendar/genericpony_v20 | Niggendar | 2024-04-22T07:22:16Z | 677 | 2 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-04-22T07:15:15Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chujiezheng/zephyr-7b-alpha-ExPO | chujiezheng | 2024-05-27T18:13:35Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:2404.16792",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T04:59:49Z | ---
license: apache-2.0
language:
- en
---
# zephyr-7b-alpha-ExPO
The extrapolated (ExPO) model based on [`HuggingFaceH4/zephyr-7b-alpha`](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and [`HuggingFaceH4/mistral-7b-sft-alpha`](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-alpha), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
Specifically, we obtain this model by extrapolating **(alpha = 0.3)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference.
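For reference, a minimal sketch of the weight-space extrapolation (assuming the form theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft) described in the paper, with alpha = 0.3):
```python
import torch
from transformers import AutoModelForCausalLM

alpha = 0.3
sft = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/mistral-7b-sft-alpha", torch_dtype=torch.bfloat16)
dpo = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
expo_state = dpo.state_dict()
for name, dpo_param in expo_state.items():
    # theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)
    expo_state[name] = dpo_param + alpha * (dpo_param - sft_state[name])

dpo.load_state_dict(expo_state)
dpo.save_pretrained("zephyr-7b-alpha-ExPO")
```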
## Evaluation Results
Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)):
| | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) |
| ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** |
| `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** |
| `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** |
| `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** |
| `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** |
| `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** |
| `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** |
| `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** |
| `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** |
| `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** |
| `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** |
Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)):
| | Original | + ExPO |
| ------------------------------------ | -------- | -------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** |
| `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** |
| `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** |
| `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** |
| `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** |
| `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** |
| `internlm/internlm2-chat-7b` | 7.72 | **7.80** |
| `internlm/internlm2-chat-20b` | 8.13 | **8.26** |
| `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** |
| `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** |
| `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
|
abhishek/autotrain-mixtral-8x7b-orpo-v2 | abhishek | 2024-05-01T21:02:38Z | 677 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-01T19:03:38Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
DrNicefellow/GPT-2-Large-32k-steps | DrNicefellow | 2024-05-01T22:35:22Z | 677 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-01T22:30:49Z | ---
license: apache-2.0
---
Self-trained GPT-2 Large, with around 770M parameters.
The tokenizer is the one from https://huggingface.co/openai-community/gpt2.
The model is being trained on around 400B tokens; this checkpoint is from step 32k.
The evaluation is being conducted now.
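A minimal usage sketch (assuming standard GPT-2 weights in this repo; as noted above, the tokenizer comes from `openai-community/gpt2`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("DrNicefellow/GPT-2-Large-32k-steps")

# Sample a short continuation from the checkpoint
inputs = tokenizer("The history of artificial intelligence", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```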
## License
This model is available under both the Apache 2.0 License and the MIT License, so both should be followed.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a cup of $2 coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note about which one you want me to drink.
|
mradermacher/Llama-3-Soliloquy-8B-v2-GGUF | mradermacher | 2024-05-05T14:42:32Z | 677 | 4 | transformers | [
"transformers",
"gguf",
"en",
"base_model:openlynn/Llama-3-Soliloquy-8B-v2",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-05T00:38:49Z | ---
base_model: openlynn/Llama-3-Soliloquy-8B-v2
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
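For example, with `llama-cpp-python` (a sketch only; the filename is one of the quants listed below, and the sampling settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from this repo and load it with llama.cpp
path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Soliloquy-8B-v2-GGUF",
    filename="Llama-3-Soliloquy-8B-v2.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a short scene between two old rivals.", max_tokens=200)
print(out["choices"][0]["text"])
```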
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-8B-v2-GGUF/resolve/main/Llama-3-Soliloquy-8B-v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
netcat420/MFANN3bv0.7 | netcat420 | 2024-05-05T17:55:22Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-05-05T04:53:53Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
MFANN 3b version 0.7

fine-tuned on the MFANN dataset as it stands on 5/5/2024, as it is an ever-changing and expanding dataset.
| Metric | Value |
|------------|------:|
| Average | 62.98 |
| ARC | 61.69 |
| HellaSwag | 75.98 |
| MMLU | 55.4 |
| TruthfulQA | 53.49 |
| Winogrande | 77.66 |
| GSM8K | 53.68 |
Winogrande is this model's strong suit.
This model is completely uncensored. |
ankurkul86/tinyllama-fine-tuned-upsell-v3 | ankurkul86 | 2024-05-05T20:24:12Z | 677 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-05T20:20:42Z | Entry not found |
saishf/Merge-Mayhem-L3-V2 | saishf | 2024-05-07T15:53:14Z | 677 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Undi95/Meta-Llama-3-8B-Instruct-hf",
"base_model:ResplendentAI/RP_Format_QuoteAsterisk_Llama3",
"base_model:ResplendentAI/Smarts_Llama3",
"base_model:ResplendentAI/Luna_Llama3",
"base_model:ResplendentAI/BlueMoon_Llama3",
"base_model:openlynn/Llama-3-Soliloquy-8B-v2",
"base_model:ResplendentAI/Aura_Llama3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-07T11:30:19Z | ---
license: cc-by-nc-4.0
base_model:
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/RP_Format_QuoteAsterisk_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/Smarts_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/Luna_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/BlueMoon_Llama3
- openlynn/Llama-3-Soliloquy-8B-v2
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/Aura_Llama3
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
mradermacher has kindly provided quants here: [mradermacher/Merge-Mayhem-L3-V2-GGUF](https://huggingface.co/mradermacher/Merge-Mayhem-L3-V2-GGUF)
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
This is quite an interesting model; it's fun so far.
But it is quite harsh, so if that's something you don't like, this model isn't for you :3
It's an attempt at loosely recreating [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B) while trying to keep it smarter, with the lovely [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) holding it together.
I'm personally enjoying this model; it's different from most Llama-3 models.
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) + [ResplendentAI/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/ResplendentAI/RP_Format_QuoteAsterisk_Llama3)
* [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) + [ResplendentAI/Smarts_Llama3](https://huggingface.co/ResplendentAI/Smarts_Llama3)
* [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) + [ResplendentAI/Luna_Llama3](https://huggingface.co/ResplendentAI/Luna_Llama3)
* [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3)
* [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) + [ResplendentAI/Aura_Llama3](https://huggingface.co/ResplendentAI/Aura_Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/Aura_Llama3
- model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/Smarts_Llama3
- model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/Luna_Llama3
- model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/BlueMoon_Llama3
- model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: openlynn/Llama-3-Soliloquy-8B-v2
dtype: float16
``` |
google/paligemma-3b-ft-aokvqa-da-224 | google | 2024-06-27T14:10:14Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"pretraining",
"image-text-to-text",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:2403.08295",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:2401.06209",
"arxiv:2305.10355",
"arxiv:2205.12522",
"arxiv:2110.11624",
"arxiv:2108.03353",
"arxiv:2010.04295",
"arxiv:2203.10244",
"arxiv:1810.12440",
"arxiv:1905.13648",
"arxiv:1608.00272",
"arxiv:1908.04913",
"license:gemma",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | image-text-to-text | 2024-05-13T03:02:19Z | ---
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 3B weights, fine-tuned with 224*224 input images on the <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/aokvqa_da.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-aokvqa-da-224)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## How to Use
PaliGemma is a single-turn vision language model not meant for conversational use,
and it works best when fine-tuning to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes,
such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue
them with a rich set of capabilities (question answering, captioning, segmentation, etc.).
However, they are not designed to be used directly, but to be transferred (by fine-tuning)
to specific tasks using a similar prompt structure. For interactive testing, you can use
the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please, refer to the [usage and limitations section](#usage-and-limitations) for intended
use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for
additional details and examples.
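For illustration, prompts follow the prefix style mentioned above. The exact prefixes and output formats each checkpoint expects are defined by its fine-tune config in `big_vision`, so treat the strings below as assumptions rather than a definitive list:
```python
# Illustrative task-prefix prompts (assumed formats; check the big_vision
# transfer configs for what a given checkpoint was actually trained on).
prompts = [
    "caption en",                       # image captioning in English
    "answer en What is on the table?",  # visual question answering
    "detect car",                       # object detection -> bounding-box tokens
    "segment car",                      # segmentation -> codeword tokens
]
```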
## Use in Transformers
The following snippets use model `google/paligemma-3b-mix-224` for reference purposes.
The model in this repo you are now browsing may have been trained for other tasks, please
make sure you use appropriate inputs for the task at hand.
### Running the default precision (`float32`) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `Un auto azul estacionado frente a un edificio.`
### Running other precisions on CUDA
For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`,
so you can use them to reduce the download size and avoid casting on your local computer.
This is how you'd run `bfloat16` on an nvidia CUDA card.
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=dtype,
device_map=device,
revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Loading in 4-bit / 8-bit
You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision:
```
pip install bitsandbytes accelerate
```
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration, BitsAndBytesConfig
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tbody><tr>
</tr></tbody><tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, describe UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
model for transfer to specialized tasks. Hence, its "out of the box" or
"zero-shot" performance might lag behind models designed specifically for
that.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
|
allknowingroger/WestlakeMaziyar-7B-slerp | allknowingroger | 2024-05-16T14:52:05Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-16T14:47:13Z | ---
tags:
- merge
- mergekit
- lazymergekit
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
license: apache-2.0
---
# WestlakeMaziyar-7B-slerp
WestlakeMaziyar-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1](https://huggingface.co/MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
layer_range: [0, 32]
- model: MaziyarPanahi/TheTop-5x7B-Instruct-S5-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/WestlakeMaziyar-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
pankajmathur/orca_mini_v4_8b | pankajmathur | 2024-05-30T23:08:23Z | 677 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text2text-generation",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-05-24T18:07:53Z | ---
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
---
**Model Name: llama_3_orca_mini_v4_8b**
# Llama-3-8b base model trained on Orca Style Mini Datasets
<img src="https://huggingface.co/pankajmathur/orca_mini_v4_8b/resolve/main/orca_minis_small.jpeg" width="auto" />
## NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further DPO/PPO tuning or Merges.
I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive, fully fine-tuned general model.
Dive in and innovate!
## Evaluation
We evaluated this model on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on similar metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.65|
|AI2 Reasoning Challenge (25-Shot)|58.02|
|HellaSwag (10-Shot) |81.65|
|MMLU (5-Shot) |63.23|
|TruthfulQA (0-shot) |55.78|
|Winogrande (5-shot) |73.95|
|GSM8k (5-shot) |67.25|
<br>
## Example Usage
Here is the ChatML prompt format
```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```
Below shows a code example on how to use this model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_slug = "pankajmathur/orca_mini_v4_8b"
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
# Build the ChatML prompt and generate a reply
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
This model is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
**Quants**
GGUF : Coming Soon
AWQ: Coming Soon
|
votepurchase/AnythingXL_xl | votepurchase | 2024-06-04T10:16:51Z | 677 | 1 | diffusers | [
"diffusers",
"safetensors",
"anything",
"ja",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-26T08:15:32Z | ---
license: mit
language:
- ja
tags:
- anything
---
[AnythingXL_xl](https://civitai.com/models/9409/or-anything-xl) |
PotatoB/Kinship-Exp-2 | PotatoB | 2024-05-31T07:44:11Z | 677 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger/YamshadowExperiment28-7B",
"allknowingroger/MultiverseEx26-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-31T07:40:47Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger/YamshadowExperiment28-7B
- allknowingroger/MultiverseEx26-7B-slerp
---
# Kinship-Exp-2
Kinship-Exp-2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
- model: allknowingroger/MultiverseEx26-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: automerger/YamshadowExperiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
``` |
moussaKam/barthez-orangesum-abstract | moussaKam | 2021-11-15T13:03:03Z | 676 | 7 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"summarization",
"bart",
"fr",
"arxiv:2010.12321",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
tags:
- summarization
- bart
language:
- fr
license: apache-2.0
widget:
- text: Citant les préoccupations de ses clients dénonçant des cas de censure après la suppression du compte de Trump, un fournisseur d'accès Internet de l'État de l'Idaho a décidé de bloquer Facebook et Twitter. La mesure ne concernera cependant que les clients mécontents de la politique de ces réseaux sociaux.
---
### Barthez model finetuned on orangeSum (abstract generation)
finetuning: examples/seq2seq (as of Feb 08 2021)
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
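A minimal usage sketch with the 🤗 `summarization` pipeline (the input text is the widget example from this card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="moussaKam/barthez-orangesum-abstract")

article = (
    "Citant les préoccupations de ses clients dénonçant des cas de censure après la suppression "
    "du compte de Trump, un fournisseur d'accès Internet de l'État de l'Idaho a décidé de bloquer "
    "Facebook et Twitter. La mesure ne concernera cependant que les clients mécontents de la "
    "politique de ces réseaux sociaux."
)
print(summarizer(article, max_length=64)[0]["summary_text"])
```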
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
TheBloke/MXLewdMini-L2-13B-GGUF | TheBloke | 2023-09-27T12:54:24Z | 676 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Undi95/MXLewdMini-L2-13B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-09-23T23:27:52Z | ---
license: cc-by-nc-4.0
model_name: Mxlewdmini L2 13B
base_model: Undi95/MXLewdMini-L2-13B
inference: false
model_creator: Undi
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mxlewdmini L2 13B - GGUF
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Mxlewdmini L2 13B](https://huggingface.co/Undi95/MXLewdMini-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi's Mxlewdmini L2 13B](https://huggingface.co/Undi95/MXLewdMini-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/MXLewdMini-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Mxlewdmini L2 13B](https://huggingface.co/Undi95/MXLewdMini-L2-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mxlewdmini-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [mxlewdmini-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [mxlewdmini-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [mxlewdmini-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [mxlewdmini-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mxlewdmini-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [mxlewdmini-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [mxlewdmini-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mxlewdmini-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [mxlewdmini-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [mxlewdmini-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [mxlewdmini-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MXLewdMini-L2-13B-GGUF/blob/main/mxlewdmini-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MXLewdMini-L2-13B-GGUF and below it, a specific filename to download, such as: mxlewdmini-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MXLewdMini-L2-13B-GGUF mxlewdmini-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MXLewdMini-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MXLewdMini-L2-13B-GGUF mxlewdmini-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mxlewdmini-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MXLewdMini-L2-13B-GGUF", model_file="mxlewdmini-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi's Mxlewdmini L2 13B
Merge:
```shell
[Xwin (0.66) + ReMM (0.33)] x [Xwin (0.33) + MLewd (0.66)]
```
The goal was to recreate https://huggingface.co/Undi95/MXLewd-L2-20B in 13B without using merge interlacing (will probably be a little less good).
<!-- description start -->
## Models used
- Undi95/MLewd-L2-13B-v2-3
- Undi95/ReMM-v2.1-L2-13B
- Xwin-LM/Xwin-LM-13B-V0.1
<!-- description end -->
One part is ReMM (0.33) and Xwin (0.66)
One part is Xwin (0.33) and MLewd (0.66)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- original-model-card end -->
|
TheBloke/DaringFortitude-GGUF | TheBloke | 2023-12-21T13:53:52Z | 676 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:sequelbox/DaringFortitude",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-11-14T21:01:14Z | ---
base_model: sequelbox/DaringFortitude
inference: false
license: llama2
model_creator: scott
model_name: DaringFortitude 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# DaringFortitude 13B - GGUF
- Model creator: [scott](https://huggingface.co/sequelbox)
- Original model: [DaringFortitude 13B](https://huggingface.co/sequelbox/DaringFortitude)
<!-- description start -->
## Description
This repo contains GGUF format model files for [scott's DaringFortitude 13B](https://huggingface.co/sequelbox/DaringFortitude).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/DaringFortitude-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DaringFortitude-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DaringFortitude-GGUF)
* [scott's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sequelbox/DaringFortitude)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [daringfortitude.Q2_K.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [daringfortitude.Q3_K_S.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [daringfortitude.Q3_K_M.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [daringfortitude.Q3_K_L.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [daringfortitude.Q4_0.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [daringfortitude.Q4_K_S.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [daringfortitude.Q4_K_M.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [daringfortitude.Q5_0.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [daringfortitude.Q5_K_S.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [daringfortitude.Q5_K_M.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [daringfortitude.Q6_K.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [daringfortitude.Q8_0.gguf](https://huggingface.co/TheBloke/DaringFortitude-GGUF/blob/main/daringfortitude.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/DaringFortitude-GGUF and below it, a specific filename to download, such as: daringfortitude.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/DaringFortitude-GGUF daringfortitude.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/DaringFortitude-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DaringFortitude-GGUF daringfortitude.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m daringfortitude.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./daringfortitude.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./daringfortitude.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: scott's DaringFortitude 13B
Daring Fortitude is a general capability upgrade to Llama 2 13b, using open source data to improve overall knowledge, precise communication, conceptual understanding, and technical skill. (Primary training set is a sub-selection of [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) cleaned further and converted to Llama 2 prompt style.)
This model is primarily recommended as a superior-to-Llama-2 baseline for additional finetuning, not for direct deployment to production as a chat model. The user accepts full responsibility for all outputs.
## Evaluation
Awaiting results from the Open LLM Leaderboard.
<!-- original-model-card end -->
|
allknowingroger/PercivalMelodias-7B-slerp | allknowingroger | 2024-04-10T18:51:19Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"AurelPx/Percival_01-7b-slerp",
"AurelPx/Meliodas-7b-dare",
"base_model:AurelPx/Percival_01-7b-slerp",
"base_model:AurelPx/Meliodas-7b-dare",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T08:36:44Z | ---
tags:
- merge
- mergekit
- lazymergekit
- AurelPx/Percival_01-7b-slerp
- AurelPx/Meliodas-7b-dare
base_model:
- AurelPx/Percival_01-7b-slerp
- AurelPx/Meliodas-7b-dare
license: apache-2.0
---
# PercivalMelodias-7B-slerp
PercivalMelodias-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
* [AurelPx/Meliodas-7b-dare](https://huggingface.co/AurelPx/Meliodas-7b-dare)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AurelPx/Percival_01-7b-slerp
layer_range: [0, 32]
- model: AurelPx/Meliodas-7b-dare
layer_range: [0, 32]
merge_method: slerp
base_model: AurelPx/Percival_01-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ABX-AI/Spicy-Laymonade-7B | ABX-AI | 2024-05-01T13:08:08Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"conversational",
"base_model:cgato/TheSpice-7b-v0.1.1",
"base_model:ABX-AI/Laymonade-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T16:13:08Z | ---
base_model:
- cgato/TheSpice-7b-v0.1.1
- ABX-AI/Laymonade-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: other
---
GGUF: https://huggingface.co/ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix

# Spicy-Laymonade-7B
Well, we have Laymonade, so why not spice it up? This merge is a step toward creating a new 9B.
However, I did try it out, and it seemed to work pretty well.
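If you want to try the full-precision merge rather than the GGUF build linked above, here is a minimal, untested sketch with the standard 🤗 Transformers API (the prompt format is an assumption; use whatever the parent models respond to best):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ABX-AI/Spicy-Laymonade-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed Alpaca-style prompt; adjust to your preferred roleplay format.
prompt = "### Instruction:\nIntroduce yourself in one short paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```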
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [ABX-AI/Laymonade-7B](https://huggingface.co/ABX-AI/Laymonade-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: cgato/TheSpice-7b-v0.1.1
layer_range: [0, 32]
- model: ABX-AI/Laymonade-7B
layer_range: [0, 32]
merge_method: slerp
base_model: ABX-AI/Laymonade-7B
parameters:
t:
- filter: self_attn
value: [0.7, 0.3, 0.6, 0.2, 0.5]
- filter: mlp
value: [0.3, 0.7, 0.4, 0.8, 0.5]
- value: 0.5
dtype: bfloat16
``` |
arvindanand/ValidateAI-33B-slerp | arvindanand | 2024-04-10T03:39:03Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"arvindanand/Deepseek-Wizard-33B-slerp",
"codefuse-ai/CodeFuse-DeepSeek-33B",
"conversational",
"base_model:arvindanand/Deepseek-Wizard-33B-slerp",
"base_model:codefuse-ai/CodeFuse-DeepSeek-33B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T21:00:29Z | ---
tags:
- merge
- mergekit
- lazymergekit
- arvindanand/Deepseek-Wizard-33B-slerp
- codefuse-ai/CodeFuse-DeepSeek-33B
base_model:
- arvindanand/Deepseek-Wizard-33B-slerp
- codefuse-ai/CodeFuse-DeepSeek-33B
license: apache-2.0
---
# ValidateAI-33B-slerp
ValidateAI-33B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [arvindanand/Deepseek-Wizard-33B-slerp](https://huggingface.co/arvindanand/Deepseek-Wizard-33B-slerp)
* [codefuse-ai/CodeFuse-DeepSeek-33B](https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: arvindanand/Deepseek-Wizard-33B-slerp
layer_range: [0, 32]
- model: codefuse-ai/CodeFuse-DeepSeek-33B
layer_range: [0, 32]
merge_method: slerp
base_model: arvindanand/Deepseek-Wizard-33B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arvindanand/ValidateAI-33B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ALBADDAWI/DeepCode-7B-Aurora | ALBADDAWI | 2024-04-10T14:01:50Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"deepseek-ai/deepseek-math-7b-instruct",
"deepseek-ai/deepseek-math-7b-base",
"deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"conversational",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-10T00:31:39Z | ---
tags:
- deepseek-ai/deepseek-math-7b-instruct
- deepseek-ai/deepseek-math-7b-base
- deepseek-ai/deepseek-coder-7b-instruct-v1.5
base_model:
- deepseek-ai/deepseek-math-7b-instruct
- deepseek-ai/deepseek-math-7b-base
- deepseek-ai/deepseek-coder-7b-instruct-v1.5
license: mit
---
# DeepCode-7B-Aurora
DeepCode-7B-Aurora is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-math-7b-instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct)
* [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base)
* [deepseek-ai/deepseek-coder-7b-instruct-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5)
## 🧩 Configuration
```yaml
models:
- model: deepseek-ai/deepseek-math-7b-rl
# No parameters necessary for base model
- model: deepseek-ai/deepseek-math-7b-instruct
parameters:
density: 0.53
weight: 0.4
- model: deepseek-ai/deepseek-math-7b-base
parameters:
density: 0.53
weight: 0.3
- model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: deepseek-ai/deepseek-math-7b-rl
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ALBADDAWI/DeepCode-7B-Aurora"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
netcat420/MFANN3bv0.3 | netcat420 | 2024-04-10T03:31:33Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-04-10T02:05:12Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
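In the absence of author-provided code, a minimal, hypothetical sketch based only on this repo's tags (phi architecture, text generation); it assumes the checkpoint loads with the standard 🤗 Transformers causal-LM classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MFANN3bv0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # assumption: standard causal-LM loading

# The prompt format below is a guess; the card does not document one.
inputs = tokenizer("Instruct: Summarize what the MFANN dataset is.\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```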
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jukofyork/Eurus-70b-nca-fixed | jukofyork | 2024-04-11T21:46:12Z | 676 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"reasoning",
"preference_learning",
"nca",
"conversational",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"arxiv:2404.02078",
"arxiv:2402.05369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T18:42:31Z | ---
license: apache-2.0
datasets:
- openbmb/UltraInteract_pair
- openbmb/UltraFeedback
tags:
- reasoning
- preference_learning
- nca
pipeline_tag: text-generation
---
This is a fixed version of [Eurus-70b-nca](https://huggingface.co/openbmb/Eurus-70b-nca) made by copying the json files from the (**base**) [CodeLlama-70b-hf](https://huggingface.co/codellama/CodeLlama-70b-hf) model and adding in the Mistral chat template, eg:
```
<s>[INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
This version has the same context length (16k) and RoPE base frequency (1000000) as `CodeLlama-70b`:
```
> ./perplexity -m eurus:70b-nca-fixed-q8_0.gguf -f wiki.test.raw -c 4096
Final estimate: PPL = 5.5200 +/- 0.03000
> ./perplexity -m eurus:70b-nca-fixed-q8_0.gguf -f wiki.test.raw -c 16384
Final estimate: PPL = 5.3553 +/- 0.02877
```
I have also tested it with multi-turn conversations for 10k+ context and it has remained perfectly coherent.
It even looks to be fine for use with a context length of 32k:
```
> ./perplexity -m eurus:70b-nca-fixed-q8_0.gguf -f wiki.test.raw -c 32768
Final estimate: PPL = 5.1806 +/- 0.02725
```
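To check that the restored chat template is picked up, a minimal, untested sketch (🤗 Transformers; the same rendered string can be fed to llama.cpp):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("jukofyork/Eurus-70b-nca-fixed")
messages = [
    {"role": "user", "content": "Write Python code to solve the task:\nReverse a string."},
]
# Assumes the tokenizer config ships the Mistral-style template described above.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to follow the [INST] ... [/INST] format shown above
```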
---
Also see: [Eurus-70b-sft-fixed](https://huggingface.co/jukofyork/Eurus-70b-sft-fixed)
---
<div align="center">
<img src="https://huggingface.co/openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">
**Eurus: A suit of open-source LLMs optimized for reasoning**
<p align="center">
<a href="#introduction"> Introduction</a> •
<a href="#evaluation">Evaluation</a>
</p>
</div>
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurus-70B-NCA is [NCA](https://arxiv.org/abs/2402.05369) fine-tuned from [Eurus-70B-SFT](https://huggingface.co/openbmb/Eurus-70b-sft) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co/openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co/openbmb/UltraFeedback).
It achieves the best overall performance among open-source models of similar sizes and in many cases even outperforms specialized models in the corresponding domains. Notably, in comprehensive benchmarking across 12 test sets covering five tasks, Eurus-70B-NCA achieves better performance than GPT-3.5 Turbo.
## Usage
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
**Coding**
```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```
**Math-CoT**
```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```
**Math-PoT**
```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
## Evaluation
- Eurus, both the 7B and 70B variants, achieve the best overall performance among open-source models of similar sizes. Eurus even outperforms specialized models in corresponding domains in many cases. Notably, Eurus-7B outperforms baselines that are 5× larger, and Eurus-70B achieves better performance than GPT-3.5 Turbo.
- Preference learning with UltraInteract can further improve performance, especially in math and the multi-turn ability.
<img src="./figures/main_exp.png" alt="stats" style="zoom: 40%;" />
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
naivecat/cherry_5_7B | naivecat | 2024-04-18T06:30:42Z | 676 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-18T01:52:51Z | ---
license: apache-2.0
---
This model is a fine-tune of the Llama 7B LLM.
Compared with the base model, the fine-tuned version exhibits enhanced capabilities and improved performance. One of the key areas of improvement lies in its ability to understand and generate nuanced language: through training on a diverse range of textual data, the model has acquired a deeper understanding of context, semantics, and syntax, and it can generate more coherent and contextually relevant responses, making it a more valuable tool for a variety of applications.
|
flammenai/flammen20-mistral-7B | flammenai | 2024-04-21T03:05:08Z | 676 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:flammenai/Date-DPO-v1",
"base_model:flammenai/flammen19X-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-21T00:37:17Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen19X-mistral-7B
datasets:
- flammenai/Date-DPO-v1
---

# flammen20-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [flammenai/Date-DPO-v1](https://huggingface.co/datasets/flammenai/Date-DPO-v1).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
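For inference (the configuration further down covers DPO training only), a minimal, untested sketch; it assumes the tokenizer ships a Mistral-style chat template, and the sampling settings are only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "flammenai/flammen20-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write the opening paragraph of a story set in a lighthouse."}]
# Assumes a chat template is defined in the tokenizer config; otherwise build the prompt manually.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=300, do_sample=True, temperature=0.9, top_p=0.95)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```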
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=8,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=2048,
max_length=4096,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
KingNish/CodeMaster-v1-7b | KingNish | 2024-05-05T17:27:24Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"microsoft/wavecoder-ultra-6.7b",
"base_model:microsoft/wavecoder-ultra-6.7b",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-05T14:47:40Z | ---
tags:
- merge
- mergekit
- lazymergekit
- microsoft/wavecoder-ultra-6.7b
base_model:
- microsoft/wavecoder-ultra-6.7b
- microsoft/wavecoder-ultra-6.7b
license: mit
pipeline_tag: text-generation
---
# CodeMaster v1 7b
CodeMaster v1 7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [microsoft/wavecoder-ultra-6.7b](https://huggingface.co/microsoft/wavecoder-ultra-6.7b)
* [microsoft/wavecoder-ultra-6.7b](https://huggingface.co/microsoft/wavecoder-ultra-6.7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: microsoft/wavecoder-ultra-6.7b
layer_range:
- 0
- 32
- model: microsoft/wavecoder-ultra-6.7b
layer_range:
- 0
- 32
merge_method: slerp
base_model: microsoft/wavecoder-ultra-6.7b
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "KingNish/CodeMaster-v1-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=8096, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
DrNicefellow/GPT-2-Large-115k-steps | DrNicefellow | 2024-05-07T21:18:50Z | 676 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-07T21:16:13Z | ---
license: apache-2.0
---
A self-trained GPT-2 Large model with around 770M parameters.
The tokenizer is the one from https://huggingface.co/openai-community/gpt2.
It is being trained on around 400B tokens; this checkpoint is from step 115k.
Evaluation is currently in progress.
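A minimal usage sketch with 🤗 Transformers; the card states the tokenizer comes from openai-community/gpt2, while the prompt and sampling settings below are only illustrative.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")  # tokenizer noted above
model = GPT2LMHeadModel.from_pretrained("DrNicefellow/GPT-2-Large-115k-steps")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```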
## License
This model is available under both the Apache 2.0 License and the MIT License; please comply with both.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note about which one you want me to drink.
|
DreadPoor/GoldenMaiden-7B-model_stock | DreadPoor | 2024-05-11T04:59:24Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-10T21:06:06Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
model-index:
- name: GoldenMaiden-7B-model_stock
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/GoldenMaiden-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/GoldenMaiden-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/GoldenMaiden-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.56
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/GoldenMaiden-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/GoldenMaiden-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/GoldenMaiden-7B-model_stock
name: Open LLM Leaderboard
---
# GoldenMaiden-7B-model_stock
GoldenMaiden-7B-model_stock is a merge of the following models (listed in the configuration below) using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* ResplendentAI/Datura_7B
* ChaoticNeutrals/Eris_Remix_DPO_7B
* Endevor/InfinityRP-v1-7B
* BarraHome/Mistroll-7B-v2.2
## 🧩 Configuration
```yaml
models:
- model: ResplendentAI/Datura_7B
- model: ChaoticNeutrals/Eris_Remix_DPO_7B
- model: Endevor/InfinityRP-v1-7B
- model: BarraHome/Mistroll-7B-v2.2
merge_method: model_stock
base_model: Endevor/InfinityRP-v1-7B
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DreadPoor/GoldenMaiden-7B-model_stock"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__GoldenMaiden-7B-model_stock)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.57|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |88.71|
|MMLU (5-Shot) |64.96|
|TruthfulQA (0-shot) |72.56|
|Winogrande (5-shot) |85.16|
|GSM8k (5-shot) |68.84|
|
BTAgent/BTAgent-v0.1 | BTAgent | 2024-05-20T16:18:04Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-18T14:34:52Z | ---
license: apache-2.0
---
# BTAgent: Large Language Model for Behavior Tree Generation with Constrained DPO
This paper introduces BTAgent, an autonomous robot control method based on large language models (LLMs) that generates robot behavior trees from an operator's instructions. The main contribution is a novel approach that combines LLMs and robot agents, leveraging the parsing capabilities of LLMs to generate structured behavior trees and enable task execution. First, we propose a self-instruct-style prompting method that requires no additional human expert annotations, using stage-based and self-reflection prompts to automatically generate behavior-tree preference datasets for instruction following. Then, we introduce a constrained DPO (Direct Preference Optimization) method to fine-tune the LLM and enhance its performance. To study the method in depth, we evaluate the generated behavior trees in the StarCraft II simulation environment, achieving an average win rate of over 95% in heterogeneous environments. To the best of our knowledge, this is the first study to generate structured behavior trees with LLMs for intelligent agent control in the StarCraft II environment. Furthermore, this work explores the feasibility of LLMs with up to 7B parameters for understanding complex instructions and generating tasks. Code and dataset download links are available at https://github.com/BTAgent/BTAgent. |
ModelsLab/RMBG | ModelsLab | 2024-05-25T08:47:57Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"SegformerForSemanticSegmentation",
"image-segmentation",
"custom_code",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2024-05-25T08:44:22Z | ---
license: apache-2.0
---
|
RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf | RichardErkhov | 2024-06-02T01:34:58Z | 676 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-02T01:25:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nebula-v2-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/Nebula-v2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 0.19GB |
| [Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 0.17GB |
| [Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 0.0GB |
| [Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q3_K.gguf) | Q3_K | 0.0GB |
| [Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 0.0GB |
| [Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 0.0GB |
| [Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 0.0GB |
| [Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 0.0GB |
| [Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 0.0GB |
| [Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 0.0GB |
| [Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q4_K.gguf) | Q4_K | 0.0GB |
| [Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 0.0GB |
| [Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 0.0GB |
| [Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 0.0GB |
| [Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 0.0GB |
| [Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q5_K.gguf) | Q5_K | 0.0GB |
| [Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 0.0GB |
| [Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 0.0GB |
| [Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q6_K.gguf) | Q6_K | 0.0GB |
| [Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf/blob/main/Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 0.0GB |
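Below is a minimal sketch of fetching one of the files above and running it locally with the `llama-cpp-python` bindings; the filename comes from the table, while the context size and sampling settings are illustrative.
```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Weyaxi_-_Nebula-v2-7B-gguf",
    filename="Nebula-v2-7B.Q2_K.gguf",  # any filename from the table above works
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What is the largest animal?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```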
Original model description:
---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Nebula-v2-7B
Original weights of Nebula-v2-7B. Finetuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Lora Weights
You can access original lora weights from here:
[PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
|
hfl/chinese-xlnet-base | hfl | 2021-03-03T01:44:59Z | 675 | 28 | transformers | [
"transformers",
"pytorch",
"tf",
"xlnet",
"text-generation",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- zh
license: "apache-2.0"
---
## Chinese Pre-Trained XLNet
This project provides an XLNet pre-trained model for Chinese, aiming to enrich Chinese natural language processing resources and offer a wider selection of Chinese pre-trained models.
We welcome all experts and scholars to download and use this model.
This project is based on CMU/Google official XLNet: https://github.com/zihangdai/xlnet
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
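A minimal sketch of loading the checkpoint with 🤗 Transformers for feature extraction; the example sentence is illustrative, and the same checkpoint can also be used with the task-specific XLNet classes.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-xlnet-base")
model = AutoModel.from_pretrained("hfl/chinese-xlnet-base")

inputs = tokenizer("哈尔滨是黑龙江的省会。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```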
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
PocketDoc/DansHalfbakedAdapters | PocketDoc | 2023-08-25T12:30:29Z | 675 | 3 | null | [
"gguf",
"Llama",
"LoRA",
"text-generation",
"en",
"dataset:PocketDoc/DansTestYard",
"region:us"
] | text-generation | 2023-05-14T05:54:21Z | ---
datasets:
- PocketDoc/DansTestYard
language:
- en
tags:
- Llama
- LoRA
pipeline_tag: text-generation
---
Repository for my in-progress LoRA files. |
maddes8cht/openlm-research-open_llama_13b-gguf | maddes8cht | 2023-11-15T11:40:30Z | 675 | 0 | null | [
"gguf",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"region:us"
] | null | 2023-11-14T21:45:42Z | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# open_llama_13b - GGUF
- Model creator: [openlm-research](https://huggingface.co/openlm-research)
- Original model: [open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
OpenLLaMA is a free reimplementation of the original LLaMA model and is licensed under the Apache 2.0 license.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model; asking the model the same question twice can easily produce bigger differences than the quantization does.
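As a sketch, recent versions of the `llama-cpp-python` bindings can pull a chosen quantization straight from this repository; the filename pattern below is an assumption, so check the repository's file list for the exact name of the quant you want.
```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# NOTE: the filename pattern is an assumed example -- verify it against the repo's file list.
llm = Llama.from_pretrained(
    repo_id="maddes8cht/openlm-research-open_llama_13b-gguf",
    filename="*Q4_K_M.gguf",  # glob pattern selecting a single quantization file
    n_ctx=2048,
)
print(llm("Q: What is the largest animal?\nA:", max_tokens=32)["choices"][0]["text"])
```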
---
# Original Model Card:
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.32 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.35 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.37 | 0.34 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.38 | 0.37 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.72 | 0.69 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.68 | 0.65 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.71 | 0.68 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.53 | 0.49 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.72 | 0.67 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.77 | 0.76 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.90 | 0.89 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.60 | 0.58 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.51 | 0.48 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.67 | 0.62 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.55 | 0.53 | 0.57 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
allknowingroger/LadybirdPercival-7B-slerp | allknowingroger | 2024-04-10T18:40:52Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/LadybirdGonzo-7B-slerp",
"Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B",
"base_model:allknowingroger/LadybirdGonzo-7B-slerp",
"base_model:Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-31T16:30:57Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/LadybirdGonzo-7B-slerp
- Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
base_model:
- allknowingroger/LadybirdGonzo-7B-slerp
- Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
license: apache-2.0
---
# LadybirdPercival-7B-slerp
LadybirdPercival-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/LadybirdGonzo-7B-slerp](https://huggingface.co/allknowingroger/LadybirdGonzo-7B-slerp)
* [Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B](https://huggingface.co/Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/LadybirdGonzo-7B-slerp
layer_range: [0, 32]
- model: Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/LadybirdPercival-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MaziyarPanahi/M7Yamshadowexperiment28_Experiment26T3q | MaziyarPanahi | 2024-04-06T20:17:38Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/M7Yamshadowexperiment28-7B",
"base_model:automerger/Experiment26T3q-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-04-06T19:59:56Z | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: M7Yamshadowexperiment28_Experiment26T3q
base_model:
- automerger/M7Yamshadowexperiment28-7B
- automerger/Experiment26T3q-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# M7Yamshadowexperiment28_Experiment26T3q
M7Yamshadowexperiment28_Experiment26T3q is a merge of the following models:
* [automerger/M7Yamshadowexperiment28-7B](https://huggingface.co/automerger/M7Yamshadowexperiment28-7B)
* [automerger/Experiment26T3q-7B](https://huggingface.co/automerger/Experiment26T3q-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/M7Yamshadowexperiment28_Experiment26T3q"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27 | MaziyarPanahi | 2024-04-09T03:00:54Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/YamshadowStrangemerges_32-7B",
"base_model:automerger/Experiment24Ognoexperiment27-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-04-09T02:45:28Z | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowStrangemerges_32_Experiment24Ognoexperiment27
base_model:
- automerger/YamshadowStrangemerges_32-7B
- automerger/Experiment24Ognoexperiment27-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowStrangemerges_32_Experiment24Ognoexperiment27
YamshadowStrangemerges_32_Experiment24Ognoexperiment27 is a merge of the following models:
* [automerger/YamshadowStrangemerges_32-7B](https://huggingface.co/automerger/YamshadowStrangemerges_32-7B)
* [automerger/Experiment24Ognoexperiment27-7B](https://huggingface.co/automerger/Experiment24Ognoexperiment27-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/YamshadowStrangemerges_32_Experiment24Ognoexperiment27"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Ppoyaa/Niel-7B | Ppoyaa | 2024-04-09T14:46:46Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Ppoyaa/StarMonarch-7B",
"cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"conversational",
"base_model:Ppoyaa/StarMonarch-7B",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-09T14:36:22Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Ppoyaa/StarMonarch-7B
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
base_model:
- Ppoyaa/StarMonarch-7B
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
license: apache-2.0
---
# Niel-7B
Niel-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Ppoyaa/StarMonarch-7B](https://huggingface.co/Ppoyaa/StarMonarch-7B)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Ppoyaa/StarMonarch-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0, 32]
merge_method: slerp
base_model: Ppoyaa/StarMonarch-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/Niel-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
birgermoell/llama-3-open-hermes-disco | birgermoell | 2024-04-19T22:34:50Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Muhammad2003/Llama3-8B-OpenHermes-DPO",
"base_model:birgermoell/llama-3-merge-disco-neural-pace",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-19T22:23:21Z | ---
base_model:
- Muhammad2003/Llama3-8B-OpenHermes-DPO
- birgermoell/llama-3-merge-disco-neural-pace
library_name: transformers
tags:
- mergekit
- merge
license: llama2
---
# llama-3-open-hermes-disco
<img src="disco_hermes.png"/>
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Muhammad2003/Llama3-8B-OpenHermes-DPO](https://huggingface.co/Muhammad2003/Llama3-8B-OpenHermes-DPO) as a base.
### Models Merged
The following models were included in the merge:
* [birgermoell/llama-3-merge-disco-neural-pace](https://huggingface.co/birgermoell/llama-3-merge-disco-neural-pace)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Muhammad2003/Llama3-8B-OpenHermes-DPO
- model: birgermoell/llama-3-merge-disco-neural-pace
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: Muhammad2003/Llama3-8B-OpenHermes-DPO
parameters:
int8_mask: true
dtype: bfloat16
``` |
saurav1199/adisesha-phi1.5-7-3-15000 | saurav1199 | 2024-04-20T01:09:58Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"custom_code",
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-20T00:55:29Z | ---
library_name: transformers
license: bigscience-openrail-m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/Llama-3-Smaug-8B-GGUF | bartowski | 2024-04-21T00:00:18Z | 675 | 17 | transformers | [
"transformers",
"gguf",
"text-generation",
"license:llama2",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-20T05:46:42Z | ---
library_name: transformers
license: llama2
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Llama-3-Smaug-8B
This model has the <|eot_id|> token set to not-special, which seems to work better with current inference engines.
Using the <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> fork <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> from pcuenca for quantization.
Original model: https://huggingface.co/abacusai/Llama-3-Smaug-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
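For reference, a small Python helper that assembles this prompt string; the double newlines after each header follow the standard Llama 3 chat template and are an assumption here, so skip this if your inference engine applies the chat template itself.
```python
def build_llama3_prompt(system_prompt: str, prompt: str) -> str:
    # Mirrors the prompt format shown above (whitespace per the standard Llama 3 template).
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "What is a large language model?"))
```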
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Smaug-8B-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-Smaug-8B-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-Smaug-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-Smaug-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-Smaug-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-Smaug-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-Smaug-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-Smaug-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-Smaug-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-Smaug-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-Smaug-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-Smaug-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-Smaug-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-Smaug-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-Smaug-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-Smaug-8B-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-Smaug-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-Smaug-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Smaug-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Smaug-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-Smaug-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-Smaug-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
saucam/aqua-smaug-0.3-8B | saucam | 2024-04-22T11:49:01Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"cognitivecomputations/dolphin-2.9-llama3-8b",
"abacusai/Llama-3-Smaug-8B",
"meta-llama/Meta-Llama-3-8B",
"conversational",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"base_model:abacusai/Llama-3-Smaug-8B",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-22T05:19:10Z | ---
tags:
- merge
- mergekit
- cognitivecomputations/dolphin-2.9-llama3-8b
- abacusai/Llama-3-Smaug-8B
- meta-llama/Meta-Llama-3-8B
base_model:
- cognitivecomputations/dolphin-2.9-llama3-8b
- abacusai/Llama-3-Smaug-8B
- meta-llama/Meta-Llama-3-8B
license: apache-2.0
---

# 💦 aqua-smaug-0.3-8B 🐉
aqua-smaug-0.3-8B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
## 🧩 Configuration
```yaml
name: aqua-smaug-0.3-8B
models:
- model: cognitivecomputations/dolphin-2.9-llama3-8b
- model: abacusai/Llama-3-Smaug-8B
- model: meta-llama/Meta-Llama-3-8B
merge_method: model_stock
base_model: abacusai/Llama-3-Smaug-8B
dtype: bfloat16
```
## Eval Results
|Benchmark| Model |winogrande| arc |gsm8k|mmlu|truthfulqa|hellaswag|Average|
|---------|--------------------------------------------------------------------|---------:|----:|----:|---:|---------:|--------:|------:|
|openllm |[aqua-smaug-0.3-8B](https://huggingface.co/saucam/aqua-smaug-0.3-8B)| 77.11|62.37|76.19| 66| 53.7| 83.02| 69.73|
Detailed Results: https://github.com/saucam/model_evals/tree/main/saucam/aqua-smaug-0.3-8B
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/aqua-smaug-0.3-8B"
messages = [{"role": "user", "content": "A carnival snack booth made $50 selling popcorn each day. It made three times as much selling cotton candy. For a 5-day activity, the booth has to pay $30 rent and $75 for the cost of the ingredients. How much did the booth earn for 5 days after paying the rent and the cost of ingredients? How much did the booth make selling cotton candy each day?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
output
```
Loading checkpoint shards: 100%|███████████████████████████████████████████████████| 2/2 [00:27<00:00, 13.83s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
A carnival snack booth made $50 selling popcorn each day. It made three times as much selling cotton candy. For a 5-day activity, the booth has to pay $30 rent and $75 for the cost of the ingredients. How much did the booth earn for 5 days after paying the rent and the cost of ingredients? How much did the booth make selling cotton candy each day?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
The carnival snack booth made $50 selling popcorn each day. Since it made three times as much selling cotton candy, it made $50 * 3 = $150 each day selling cotton candy.
For a 5-day activity, the booth made $50 * 5 = $250 selling popcorn and $150 * 5 = $750 selling cotton candy.
The booth has to pay $30 rent and $75 for the cost of the ingredients for 5 days, which is a total of $30 + $75 = $105.
After paying the rent and the cost of ingredients, the booth earned $250 + $750 - $105 = $895 for 5 days.
Therefore, the booth made $150 each day selling cotton candy.
So, the total amount earned by selling popcorn is $250 and by selling cotton candy is $750. After deducting the rent and cost of ingredients, the booth earned a total of $895 for the 5-day activity.
Hope this helps! Let me know if you have any more questions. 😊
### References
- [Carnival Booth Earnings Calculation](https://www.calculator.net/calculators/math/equation-calculator.html) (for verifying calculations)
- [Cotton Candy
``` |
ShenaoZhang/0.001_3iters_bs256_nodpo_only4w_iter_2 | ShenaoZhang | 2024-04-28T08:23:34Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_3iters_bs256_nodpo_only4w_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T07:27:59Z | ---
license: mit
base_model: ShenaoZhang/0.001_3iters_bs256_nodpo_only4w_iter_1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_3iters_bs256_nodpo_only4w_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_3iters_bs256_nodpo_only4w_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_3iters_bs256_nodpo_only4w_iter_1](https://huggingface.co/ShenaoZhang/0.001_3iters_bs256_nodpo_only4w_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
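For reference, here is a rough sketch of how these settings map onto `transformers` `TrainingArguments`; the actual alignment-handbook/trl recipe may wire things differently, and the `bf16` flag is an assumption rather than something stated above.
```python
from transformers import TrainingArguments
# Approximate mapping of the hyperparameters listed above
# (8 devices x 8 per-device x 4 accumulation = 256 effective train batch).
training_args = TrainingArguments(
    output_dir="0.001_3iters_bs256_nodpo_only4w_iter_2",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,  # assumption: not stated in the hyperparameter list above
)
```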
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
maldv/spring-chicken-8x8b | maldv | 2024-05-10T16:50:59Z | 675 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"llama-3",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-05T01:03:34Z | ---
library_name: transformers
tags:
- llama-3
license: cc-by-nc-4.0
---

[GGUF Quants](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF)
# Spring Chicken 8x8b
I've been really impressed with how well these frankenmoe models quantize compared to the base Llama 8B, while still running far faster than the 70B. There have been some great 4x8b models released recently, so I tried an 8x8b.
```
base_model: ./maldv/spring
gate_mode: hidden
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: ./models/Llama3-ChatQA-1.5-8B
positive_prompts:
- 'add numbers'
- 'solve for x'
negative_prompts:
- 'I love you'
- 'Help me'
- source_model: ./models/InfinityRP-v2-8B
positive_prompts:
- 'they said'
- source_model: ./models/Einstein-v6.1-Llama3-8B
positive_prompts:
- 'the speed of light'
- 'chemical reaction'
- source_model: ./models/Llama-3-Soliloquy-8B-v2
positive_prompts:
- 'write a'
- source_model: ./models/Llama-3-Lumimaid-8B-v0.1
positive_prompts:
- 'she looked'
- source_model: ./models/L3-TheSpice-8b-v0.8.3
positive_prompts:
- 'they felt'
- source_model: ./models/Llama3-OpenBioLLM-8B
positive_prompts:
- 'the correct treatment'
- source_model: ./models/Llama-3-SauerkrautLM-8b-Instruct
positive_prompts:
- 'help me'
- 'should i'
```
### Spring
Spring is a cascading dare-ties merge of the following models:
```python
[
'Einstein-v6.1-Llama3-8B',
'L3-TheSpice-8b-v0.8.3',
'Configurable-Hermes-2-Pro-Llama-3-8B',
'Llama3-ChatQA-1.5-8B',
'Llama3-OpenBioLLM-8B',
'InfinityRP-v2-8B',
'Llama-3-Soliloquy-8B-v2',
'Tiamat-8b-1.2-Llama-3-DPO',
'Llama-3-8B-Instruct-Gradient-1048k',
'Llama-3-Lumimaid-8B-v0.1',
'Llama-3-SauerkrautLM-8b-Instruct',
'Meta-Llama-3-8B-Instruct-DPO',
]
```
I'm finding my iq4_xs to be working well. Llama 3 instruct format works well, but minimal format is highly creative.
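For those running the full-precision weights, a minimal `transformers` sketch of the Llama 3 Instruct format is below; it assumes the bundled tokenizer ships the Llama 3 Instruct chat template, and the sampling settings and messages are only illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "maldv/spring-chicken-8x8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [
    {"role": "system", "content": "You are a narrator for an emotive fantasy roleplay."},
    {"role": "user", "content": "Describe the inn as the party walks in from the rain."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```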
## Scores
Not greater than the sum of its parts, based on the scores; but it is really smart for an emotive RP model.
Metric | Score
---|---
Average | 65.89
ARC | 63.05
HellaSwag | 82.49
MMLU | 64.45
TruthfulQA | 51.63
Winogrande | 76.24
GSM8K | 51.63
[Details](https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__spring-chicken-8x8b) |
adamo1139/Yi-34B-200K-HESOYAM-0905 | adamo1139 | 2024-05-27T21:40:11Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:adamo1139/rawrr_v2-2_stage1",
"dataset:adamo1139/HESOYAM_v0.2",
"arxiv:2403.07691",
"arxiv:2403.03507",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-09T23:24:48Z | ---
license: apache-2.0
datasets:
- adamo1139/rawrr_v2-2_stage1
- adamo1139/HESOYAM_v0.2
---
## Known Issues
<b>There's something weird going on with the tokenizer. The EXL2 quant works fine in ooba but not in exui. The BNB 4-bit quant works fine in ooba. For best results, use ooba with the BOS token inserted, a repetition penalty of 1.05, and probably the exllamav2_HF loader over exllamav2.</b>
<img src="https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/BZ1TunduCB0xjfeTCObgL.png" width="600" style="float:center" />
## Model Description
Have you ever wanted a sandbox for text-based social media? A place where you can bully a person, throw arguments or attack someone without any kind of actual harm being done and without any repercussions? All of it fully local, so nobody but you will ever know? No? Well, HESOYAM kinda can do that, but it's not exactly a bully simulator, that's just one of the ways you could use it. Specify in the system prompt the place on the internet that you want to be, and then start a discussion. Will it be engaging or will you be sucked into someone's depression? For now, probably the latter. Still, I had some insightful, concrete, useful discussions with this model, it's not all gptslopped fluff. It does have a lot of depressive negative tones though, so it might not be for everyone.
To get this model, first, I fine-tuned Yi-34B-200K (xlctx, as in the second version of the 34B 200K model, not the new 1.5) on [adamo1139/rawrr_v2-2_stage1](https://huggingface.co/datasets/adamo1139/rawrr_v2-2_stage1) so that the base model forgets its AI assistant programming and behaves like a completion model trained on a raw corpus of the internet. This was done using [ORPO](https://arxiv.org/abs/2403.07691) and [GaLore](https://arxiv.org/abs/2403.03507) - all of it handled by [Unsloth](https://github.com/unslothai/unsloth). I would say it's a moderately successful finetune; I plan to enhance the rawrr dataset with richer data to make better finetunes of this kind in the future. The resulting adapter file can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore-PEFT) and the FP16 model file for the RAWrr ORPO finetune can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore).
Once I had a good base model, I fine-tuned it on the [HESOYAM 0.2](https://huggingface.co/datasets/adamo1139/HESOYAM_v0.2) dataset. It's a collection of single-turn conversations from around 10 subreddits and multi-turn conversations from board /x/. There's also pippa in there. All samples have system prompts that tell the model where the discussion is taking place; this is useful when you are deciding where you want your sandbox discussion to happen. Here, I used classic SFT with GaLore and Unsloth. I wanted to get some results quickly, so it's trained for just 0.4 epochs. The adapter after that part of fine-tuning can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-HESOYAM-RAW-0905-GaLore-PEFT).
[Conversation samples](https://huggingface.co/datasets/adamo1139/misc/blob/main/benchmarks/yi-34b-200k-xlctx-hesoyam-raw-0905/hesoyam_0905_samples.txt) - I put in a seed prompt and let the model generate the rest of the conversation.
[Results on my base benchmarks](https://huggingface.co/datasets/adamo1139/misc/blob/main/benchmarks/yi-34b-200k-xlctx-hesoyam-raw-0905/benchmark_prompts.txt) - Responses suggest it still has some general assistant capabilities. I don't really want that; maybe I should up the learning rate for the next run so that it stays in character more.
## Prompt template
It's chatml, like always.
```
<|im_start|>system
A chat on subreddit /r/pcmasterrace.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
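A minimal sketch of building this prompt with `transformers` is below; it assumes the repo's tokenizer ships a ChatML chat template, otherwise just format the string by hand as shown above. The subreddit and question are illustrative.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("adamo1139/Yi-34B-200K-HESOYAM-0905")
messages = [
    {"role": "system", "content": "A chat on subreddit /r/pcmasterrace."},
    {"role": "user", "content": "Is 32GB of RAM overkill for gaming?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the ChatML layout above
```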
## Quants
I haven't done them yet. I will maybe upload one EXL2 quant.
## Intended uses & limitations
Use is limited by apache-2.0 license.
## Credits
Thanks to unsloth and huggingface team for providing software packages used during fine-tuning. \
Thanks to authors of ORPO and GaLore for their innovative fine-tuning strategies. \
Thanks to random people who post datasets on hf, you rock!
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" alt="made with Unsloth" width="400" height="64"/>](https://github.com/unslothai/unsloth) |
win10/Meta-Llama-3-15B-Instruct | win10 | 2024-05-23T10:18:24Z | 675 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pytorch",
"llama-3",
"mergekit",
"merge",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-13T14:29:24Z | ---
library_name: transformers
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- llama
- llama-3
- mergekit
- merge
license: llama3
---
# llama3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # embed_tokens comes along with the ride with whatever is the first layer
layer_range: [0, 1]
- model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # add dummy second model with 0 weight so tokenizer-based merge routine is invoked for embed_tokens
layer_range: [0, 1]
- sources:
- model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
layer_range: [1, 24]
- sources:
- model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
layer_range: [8, 20]
- sources:
- model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
layer_range: [18, 32]
- model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
layer_range: [18, 32]
merge_method: passthrough
dtype: bfloat16
```
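As a rough sanity check of the stacked layer count, the merged model can be loaded and its parameter count printed; this is only a sketch and assumes enough GPU/CPU memory for the ~15B weights.
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
    "win10/Meta-Llama-3-15B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
# Count parameters to confirm the passthrough merge produced a ~15B model
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.1f}B parameters")
```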
|
johnsutor/mixture-of-gemmas-slerp | johnsutor | 2024-05-28T13:36:55Z | 675 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"base_model:google/gemma-7b",
"base_model:google/codegemma-7b",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-28T13:27:39Z | ---
base_model:
- google/gemma-7b
- google/codegemma-7b
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [google/gemma-7b](https://huggingface.co/google/gemma-7b)
* [google/codegemma-7b](https://huggingface.co/google/codegemma-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: google/gemma-7b
- model: google/codegemma-7b
merge_method: slerp
base_model: google/gemma-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
flammenai/Mahou-1.3-mistral-7B | flammenai | 2024-05-29T01:28:23Z | 675 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:flammenai/MahouMix-v1",
"base_model:nbeerbower/Flammen-Mahou-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-29T00:27:10Z | ---
library_name: transformers
license: apache-2.0
base_model:
- nbeerbower/Flammen-Mahou-mistral-7B
datasets:
- flammenai/MahouMix-v1
---

# Mahou-1.3-mistral-7B
Mahou is our attempt to build a production-ready conversational/roleplay LLM.
Future versions will be released iteratively and finetuned from flammen.ai conversational data.
### Chat Format
This model has been trained to use ChatML format. Note the additional tokens in [tokenizer_config.json](tokenizer_config.json).
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
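A minimal sketch of building this prompt in Python is below; `{{char}}`/`{{user}}` are frontend placeholders (e.g. SillyTavern), so the names and messages used here are purely illustrative.
```python
# Manual ChatML construction matching the format above.
def build_prompt(system: str, char: str, history: list[tuple[str, str]]) -> str:
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for speaker, message in history:
        parts.append(f"<|im_start|>{speaker}\n{message}<|im_end|>")
    parts.append(f"<|im_start|>{char}\n")  # cue the character to reply next
    return "\n".join(parts)
prompt = build_prompt(
    system="You are Mahou, a cheerful adventurer.",
    char="Mahou",
    history=[("Traveler", "*waves* hey, which way to the guild hall?")],
)
print(prompt)
```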
### Roleplay Format
- Speech without quotes.
- Actions in `*asterisks*`
```
*leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```
### ST Settings
1. Use ChatML for the Context Template.
2. Enable Instruct Mode.
3. Use the [Mahou preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json).
4. Recommended: Add newline as a stopping string: `["\n"]`
### Method
Finetuned for 10 epochs using an A100 on Google Colab.
[Fine-tune Llama 3 with ORPO](https://huggingface.co/blog/mlabonne/orpo-llama-3) - [Maxime Labonne](https://huggingface.co/mlabonne) |
RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf | RichardErkhov | 2024-06-03T17:40:09Z | 675 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-03T09:09:04Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BigMaid-20B-v1.0 - GGUF
- Model creator: https://huggingface.co/TeeZee/
- Original model: https://huggingface.co/TeeZee/BigMaid-20B-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [BigMaid-20B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q2_K.gguf) | Q2_K | 6.91GB |
| [BigMaid-20B-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [BigMaid-20B-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [BigMaid-20B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [BigMaid-20B-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [BigMaid-20B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q3_K.gguf) | Q3_K | 9.04GB |
| [BigMaid-20B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [BigMaid-20B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [BigMaid-20B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [BigMaid-20B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q4_0.gguf) | Q4_0 | 10.52GB |
| [BigMaid-20B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [BigMaid-20B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [BigMaid-20B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q4_K.gguf) | Q4_K | 11.22GB |
| [BigMaid-20B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [BigMaid-20B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q4_1.gguf) | Q4_1 | 11.67GB |
| [BigMaid-20B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q5_0.gguf) | Q5_0 | 12.83GB |
| [BigMaid-20B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [BigMaid-20B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q5_K.gguf) | Q5_K | 13.18GB |
| [BigMaid-20B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [BigMaid-20B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q5_1.gguf) | Q5_1 | 13.98GB |
| [BigMaid-20B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q6_K.gguf) | Q6_K | 15.28GB |
| [BigMaid-20B-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf/blob/main/BigMaid-20B-v1.0.Q8_0.gguf) | Q8_0 | 19.79GB |
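As a hedged example, any single file from the table above can be fetched with `huggingface_hub` (pick the quant that fits your RAM/VRAM budget; Q4_K_M is only an illustrative choice).
```python
from huggingface_hub import hf_hub_download
# Download one quant from this repo into the current directory
path = hf_hub_download(
    repo_id="RichardErkhov/TeeZee_-_BigMaid-20B-v1.0-gguf",
    filename="BigMaid-20B-v1.0.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```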
Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- roleplay
- text-generation-inference
- merge
- not-for-all-audiences
model-index:
- name: BigMaid-20B-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
---
# BigMaid-20B-v1.0

## Model Details
- A result of interleaving layers of [KatyTheCutie/EstopianMaid-13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B) with itself.
- The resulting model has approximately 20 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/BigMaid-20B-v1.0/resolve/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
- Bigger version of the original, uncensored like the original.
- Retains all the good qualities of the original, with an additional affinity for abstract and lighthearted humor.
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__BigMaid-20B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.07|
|AI2 Reasoning Challenge (25-Shot)|61.35|
|HellaSwag (10-Shot) |85.26|
|MMLU (5-Shot) |57.15|
|TruthfulQA (0-shot) |55.29|
|Winogrande (5-shot) |75.30|
|GSM8k (5-shot) | 2.05|
|
PrunaAI/cognitivecomputations-dolphin-2.9.2-qwen2-72b-GGUF-smashed | PrunaAI | 2024-06-08T08:49:55Z | 675 | 0 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-06-08T01:44:25Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the cognitivecomputations/dolphin-2.9.2-qwen2-72b model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
 - **Step 1**: Under Download Model, you can enter the model repo: cognitivecomputations-dolphin-2.9.2-qwen2-72b-GGUF-smashed and below it, a specific filename to download, such as: dolphin-2.9.2-qwen2-72b.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download cognitivecomputations-dolphin-2.9.2-qwen2-72b-GGUF-smashed dolphin-2.9.2-qwen2-72b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download cognitivecomputations-dolphin-2.9.2-qwen2-72b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download cognitivecomputations-dolphin-2.9.2-qwen2-72b-GGUF-smashed dolphin-2.9.2-qwen2-72b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m dolphin-2.9.2-qwen2-72b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {{prompt}} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./dolphin-2.9.2-qwen2-72b.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {{prompt}} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./dolphin-2.9.2-qwen2-72b.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{{"role": "system", "content": "You are a story writing assistant."}},
{{
"role": "user",
"content": "Write a story about llamas."
}}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
elyza/Llama-3-ELYZA-JP-8B-AWQ | elyza | 2024-06-26T02:56:39Z | 675 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ja",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-06-25T04:31:31Z | ---
library_name: transformers
license: llama3
language:
- ja
- en
---
# Llama-3-ELYZA-JP-8B-AWQ

## Model Description
**Llama-3-ELYZA-JP-8B** is a large language model trained by [ELYZA, Inc](https://elyza.ai/).
Based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), it has been enhanced for Japanese usage through additional pre-training and instruction tuning. (Built with Meta Llama3)
For more details, please refer to [our blog post](https://note.com/elyza/n/n360b6084fdbd).
## Quantization
We have prepared two quantized model options, GGUF and AWQ. This is the [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) model.
The following table shows the performance degradation due to quantization:
| Model | ELYZA-tasks-100 GPT4 score |
| :-------------------------------- | ---: |
| [Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) | 3.655 |
| [Llama-3-ELYZA-JP-8B-GGUF (Q4_K_M)](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-GGUF) | 3.57 |
| [Llama-3-ELYZA-JP-8B-AWQ](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-AWQ) | 3.39 |
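Besides vLLM (below), the AWQ checkpoint can also be loaded directly with 🤗 Transformers when `autoawq` is installed; this is only a sketch, and the generation settings simply mirror the vLLM examples rather than an official recommendation for this path.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "elyza/Llama-3-ELYZA-JP-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"},
    {"role": "user", "content": "古代ギリシャを学ぶ上で知っておくべきポイントは?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=1000, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```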
## Use with vLLM
Install vLLM:
```bash
pip install vllm
```
### vLLM Offline Batched Inference
```python
from vllm import LLM, SamplingParams
llm = LLM(model="elyza/Llama-3-ELYZA-JP-8B-AWQ", quantization="awq")
tokenizer = llm.get_tokenizer()
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=1000)
messages_batch = [
[
{"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
{"role": "user", "content": "古代ギリシャを学ぶ上で知っておくべきポイントは?"}
],
[
{"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
{"role": "user", "content": "クマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を書いてください。"}
]
]
prompts = [
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
for messages in messages_batch
]
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
print(output.outputs[0].text)
print("=" * 50)
```
### vLLM OpenAI Compatible Server
Start the API server:
```bash
python -m vllm.entrypoints.openai.api_server \
--model elyza/Llama-3-ELYZA-JP-8B-AWQ \
--port 8000 \
--host localhost \
--quantization awq
```
Call the API using curl:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "elyza/Llama-3-ELYZA-JP-8B-AWQ",
"messages": [
{ "role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。" },
{ "role": "user", "content": "古代ギリシャを学ぶ上で知っておくべきポイントは?" }
],
"temperature": 0.6,
"max_tokens": 1000,
"stream": false
}'
```
Call the API using Python:
```python
import openai
client = openai.OpenAI(
base_url="http://localhost:8000/v1",
api_key = "dummy_api_key"
)
completion = client.chat.completions.create(
model="elyza/Llama-3-ELYZA-JP-8B-AWQ",
messages=[
{"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"},
{"role": "user", "content": "古代ギリシャを学ぶ上で知っておくべきポイントは?"}
]
)
```
## Developers
Listed in alphabetical order.
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)
- [Daisuke Oba](https://huggingface.co/daisuk30ba)
- [Sam Passaglia](https://huggingface.co/passaglia)
- [Akira Sasaki](https://huggingface.co/akirasasaki)
## License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
## How to Cite
```tex
@misc{elyzallama2024,
title={elyza/Llama-3-ELYZA-JP-8B},
url={https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B},
author={Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Daisuke Oba and Sam Passaglia and Akira Sasaki},
year={2024},
}
```
## Citations
```tex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
|
newtonkwan/gpt2-xl-ft-4 | newtonkwan | 2022-03-17T16:38:08Z | 674 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-17T15:00:34Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-4
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 15 | 3.5549 |
| No log | 1.96 | 30 | 1.4216 |
| No log | 2.96 | 45 | 1.2969 |
| No log | 3.96 | 60 | 1.2823 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 35.67070770263672
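A hedged sketch of reproducing a perplexity number with this checkpoint is below; the exact evaluation text and windowing used for the score above are not documented, so treat it purely as an illustration.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "newtonkwan/gpt2-xl-ft-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()
text = "Replace this with a sample from the evaluation split."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean cross-entropy over tokens
print(f"perplexity: {torch.exp(loss).item():.2f}")
```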
### Dataset Size
Size: 5000 |
timm/regnetx_320.tv2_in1k | timm | 2024-02-10T23:33:04Z | 674 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2003.13678",
"license:bsd-3-clause",
"region:us"
] | image-classification | 2023-03-21T06:36:08Z | ---
license: bsd-3-clause
library_name: timm
tags:
- image-classification
- timm
---
# Model card for regnetx_320.tv2_in1k
A RegNetX-32GF image classification model. Pretrained on ImageNet-1k by torchvision contributors (see ImageNet1K-V2 weight details https://github.com/pytorch/vision/issues/3995#new-recipe).
The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* configurable output stride (dilation)
* configurable activation and norm layers
* option for a pre-activation bottleneck block used in RegNetV variant
* only known RegNetZ model definitions with pretrained weights
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 107.8
- GMACs: 31.8
- Activations (M): 36.3
- Image size: 224 x 224
- **Papers:**
- Designing Network Design Spaces: https://arxiv.org/abs/2003.13678
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnetx_320.tv2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnetx_320.tv2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 336, 56, 56])
# torch.Size([1, 672, 28, 28])
# torch.Size([1, 1344, 14, 14])
# torch.Size([1, 2520, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnetx_320.tv2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2520, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
  author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/ghostnetv2_160.in1k | timm | 2023-08-20T06:13:46Z | 674 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2211.12905",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-08-20T06:13:27Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for ghostnetv2_160.in1k
A GhostNetV2 image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.4
- GMACs: 0.4
- Activations (M): 7.2
- Image size: 224 x 224
- **Papers:**
- GhostNetV2: Enhance Cheap Operation with Long-Range Attention: https://arxiv.org/abs/2211.12905
- **Original:** https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('ghostnetv2_160.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ghostnetv2_160.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 112, 112])
# torch.Size([1, 40, 56, 56])
# torch.Size([1, 64, 28, 28])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 256, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ghostnetv2_160.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
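The same model embeds batches in a single forward pass; a brief sketch reusing the transforms above (the image is stacked twice purely for illustration):
```python
import torch

batch = torch.stack([transforms(img), transforms(img)])
with torch.no_grad():
    embeddings = model(batch)  # shape: (2, num_features)
```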
## Citation
```bibtex
@article{tang2022ghostnetv2,
title={GhostNetv2: enhance cheap operation with long-range attention},
author={Tang, Yehui and Han, Kai and Guo, Jianyuan and Xu, Chang and Xu, Chao and Wang, Yunhe},
journal={Advances in Neural Information Processing Systems},
volume={35},
pages={9969--9982},
year={2022}
}
```
|
TheBloke/Huginn-22B-Prototype-GGUF | TheBloke | 2023-09-27T13:02:07Z | 674 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:The-Face-Of-Goonery/Huginn-22b-Prototype",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-08-27T09:20:09Z | ---
license: llama2
model_name: Huginn 22B Prototype
inference: false
model_creator: Caleb Morgan
model_link: https://huggingface.co/The-Face-Of-Goonery/Huginn-22b-Prototype
model_type: llama
quantized_by: TheBloke
base_model: The-Face-Of-Goonery/Huginn-22b-Prototype
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Huginn 22B Prototype - GGUF
- Model creator: [Caleb Morgan](https://huggingface.co/The-Face-Of-Goonery)
- Original model: [Huginn 22B Prototype](https://huggingface.co/The-Face-Of-Goonery/Huginn-22b-Prototype)
## Description
This repo contains GGUF format model files for [Caleb Morgan's Huginn 22B Prototype](https://huggingface.co/The-Face-Of-Goonery/Huginn-22b-Prototype).
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
Here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGML)
* [Caleb Morgan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Huginn-22b-Prototype)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
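If you are scripting requests, a small helper (not part of the original card) keeps the template in one place:
```python
# hypothetical helper for building Alpaca-style prompts for this model
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Summarise the plot of Beowulf in two sentences."))
```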
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [huginn-22b-prototype.Q2_K.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q2_K.gguf) | Q2_K | 2 | 9.08 GB| 11.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [huginn-22b-prototype.Q3_K_S.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q3_K_S.gguf) | Q3_K_S | 3 | 9.47 GB| 11.97 GB | very small, high quality loss |
| [huginn-22b-prototype.Q3_K_M.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q3_K_M.gguf) | Q3_K_M | 3 | 10.61 GB| 13.11 GB | very small, high quality loss |
| [huginn-22b-prototype.Q3_K_L.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q3_K_L.gguf) | Q3_K_L | 3 | 11.61 GB| 14.11 GB | small, substantial quality loss |
| [huginn-22b-prototype.Q4_0.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q4_0.gguf) | Q4_0 | 4 | 12.34 GB| 14.84 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [huginn-22b-prototype.Q4_K_S.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q4_K_S.gguf) | Q4_K_S | 4 | 12.42 GB| 14.92 GB | small, greater quality loss |
| [huginn-22b-prototype.Q4_K_M.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q4_K_M.gguf) | Q4_K_M | 4 | 13.18 GB| 15.68 GB | medium, balanced quality - recommended |
| [huginn-22b-prototype.Q5_0.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q5_0.gguf) | Q5_0 | 5 | 15.04 GB| 17.54 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [huginn-22b-prototype.Q5_K_S.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q5_K_S.gguf) | Q5_K_S | 5 | 15.04 GB| 17.54 GB | large, low quality loss - recommended |
| [huginn-22b-prototype.Q5_K_M.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q5_K_M.gguf) | Q5_K_M | 5 | 15.47 GB| 17.97 GB | large, very low quality loss - recommended |
| [huginn-22b-prototype.Q6_K.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q6_K.gguf) | Q6_K | 6 | 17.91 GB| 20.41 GB | very large, extremely low quality loss |
| [huginn-22b-prototype.Q8_0.gguf](https://huggingface.co/TheBloke/Huginn-22B-Prototype-GGUF/blob/main/huginn-22b-prototype.Q8_0.gguf) | Q8_0 | 8 | 23.19 GB| 25.69 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
```
./main -t 10 -ngl 32 -m huginn-22b-prototype.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Huginn-22B-Prototype-GGUF", model_file="huginn-22b-prototype.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
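Token streaming is also available; a minimal sketch, assuming your ctransformers version supports `stream=True` and the usual generation kwargs:
```python
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short poem about ravens.\n\n### Response:\n"
)

for token in llm(prompt, stream=True, max_new_tokens=256, temperature=0.7):
    print(token, end="", flush=True)
```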
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a brief ctransformers sketch follows the links):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
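A minimal sketch of the ctransformers route, assuming a LangChain version that ships the `CTransformers` wrapper with these parameters:
```python
from langchain.llms import CTransformers

# assumed API: the wrapper pulls the named GGUF file from the Hugging Face Hub
llm = CTransformers(
    model="TheBloke/Huginn-22B-Prototype-GGUF",
    model_file="huginn-22b-prototype.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)

print(llm("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nName three facts about ravens.\n\n### Response:\n"))
```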
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Caleb Morgan's Huginn 22B Prototype
A prototype of https://huggingface.co/upstage/llama-30b-instruct-2048 merged with Huginn v3 using chargoddard's frankenllama script.
The model has not been finetuned yet, but it seems functional from testing so far. I plan on finetuning it later; I'm just uploading the prototype so I can distribute it to testers.
It still uses the Alpaca format, or chat.
<!-- original-model-card end -->
|
mys/ggml_CLIP-ViT-L-14-laion2B-s32B-b82K | mys | 2023-09-27T08:39:31Z | 674 | 1 | null | [
"gguf",
"clip",
"vision",
"ggml",
"clip.cpp",
"clip-cpp-gguf",
"license:mit",
"region:us"
] | null | 2023-09-27T08:29:58Z | ---
license: mit
tags:
- clip
- vision
- ggml
- clip.cpp
- clip-cpp-gguf
---
## Converted files for use with clip.cpp
See https://github.com/monatis/clip.cpp
# Experimental
The file format is not stable yet, so expect breaking changes. I will update the files from time to time.
|
mys/ggml_clip-vit-base-patch32 | mys | 2023-09-27T08:45:21Z | 674 | 0 | null | [
"gguf",
"clip",
"vision",
"ggml",
"clip.cpp",
"clip-cpp-gguf",
"license:mit",
"region:us"
] | null | 2023-09-27T08:41:51Z | ---
license: mit
tags:
- clip
- vision
- ggml
- clip.cpp
- clip-cpp-gguf
---
## Converted files for use with clip.cpp
See https://github.com/monatis/clip.cpp
# Experimental
The file format is not stable yet, so expect breaking changes. I will update the files from time to time.
|
hotshotco/SDXL-512 | hotshotco | 2023-10-07T14:43:07Z | 674 | 47 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2023-10-03T08:30:59Z | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
---

<hr>
# Overview
SDXL-512 is a checkpoint fine-tuned from SDXL 1.0 that is designed to more simply generate higher-fidelity images at and around the 512x512 resolution. The model has been fine-tuned using a learning rate of 1e-6 over 7000 steps with a batch size of 64 on a curated dataset of multiple aspect ratios, alternating low- and high-resolution batches (per aspect ratio) so as not to impair the base model's existing performance at higher resolution.
*Note:* It bears repeating that SDXL-512 was not trained to be "better" than SDXL, but rather to simplify prompting for higher-fidelity outputs at and around the 512x512 resolution.
- **Use it with [Hotshot-XL](https://huggingface.co/hotshotco/Hotshot-XL) (recommended)**
<hr>
# Model Description
- **Developed by**: Natural Synthetics Inc.
- **Model type**: Diffusion-based text-to-image generative model
- **License**: CreativeML Open RAIL++-M License
- **Model Description**: This is a model that can be used to generate and modify higher-fidelity images at and around the 512x512 resolution.
- **Resources for more information**: Check out our [GitHub Repository](https://github.com/hotshotco/Hotshot-XL).
- **Finetuned from model**: [Stable Diffusion XL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
<hr>
# 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.18.2:
```
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, and `accelerate`, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
Running the pipeline (if you don't swap the scheduler, it will run with the default **EulerDiscreteScheduler**; in this example we swap it to **EulerAncestralDiscreteScheduler**):
```py
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler
pipe = StableDiffusionXLPipeline.from_pretrained(
"hotshotco/SDXL-512",
use_safetensors=True,
).to('cuda')
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
prompt = "a woman laughing"
negative_prompt = ""
image = pipe(
prompt,
negative_prompt=negative_prompt,
width=512,
height=512,
target_size=(1024, 1024),
original_size=(4096, 4096),
num_inference_steps=50
).images[0]
image.save("woman_laughing.png")
```
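If VRAM is limited, loading in half precision and enabling CPU offload usually helps; a minimal sketch, assuming a diffusers release recent enough to provide `enable_model_cpu_offload` (it also requires `accelerate`):
```py
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "hotshotco/SDXL-512",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # keeps submodules on the GPU only while they are needed
```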
<hr>
# Limitations and Bias
## Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
## Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
bartowski/internlm2-chat-7b-sft-llama | bartowski | 2024-01-18T16:47:52Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-18T16:45:00Z | ---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[OpenCompass](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM)
</div>
## Converted using <a href="https://huggingface.co/chargoddard">Charles Goddard's</a> conversion script to create llama models from internlm
Original REPO link: https://huggingface.co/internlm/internlm2-chat-7b-sft
ExLlamaV2 quants: https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2
|