modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
ilsp/Meltemi-7B-v1 | ilsp | 2024-06-17T11:53:19Z | 674 | 44 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"el",
"en",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:1905.07830",
"arxiv:2009.03300",
"arxiv:2308.16884",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-22T12:59:01Z | ---
license: apache-2.0
language:
- el
- en
library_name: transformers
pipeline_tag: text-generation
---
# Meltemi: A large foundation Language Model for the Greek language
We introduce Meltemi, the first Greek Large Language Model (LLM) trained by the [Institute for Language and Speech Processing](https://www.athenarc.gr/en/ilsp) at [Athena Research & Innovation Center](https://www.athenarc.gr/en).
Meltemi is built on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), extending its capabilities for Greek through continual pretraining on a large corpus of high-quality and locally relevant Greek texts. We present Meltemi-7B-v1, as well as an instruction fine-tuned version [Meltemi-7B-Instruct-v1](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1).

# Model Information
- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens
- 8192 context length
- We extend the pretraining of Mistral-7B with added proficiency for the Greek language, by utilizing a large corpus consisting of approximately **40 billion tokens**.
* This corpus includes 28.5 billion monolingual Greek tokens, constructed from publicly available resources. Additionally, to mitigate catastrophic forgetting and ensure that the model has bilingual capabilities, we use additional sub-corpora with monolingual English texts (10.5 billion tokens) and Greek-English parallel data (600 million tokens).
* This corpus has been processed, filtered, and deduplicated to ensure data quality (a detailed description of our data processing pipeline will be published in our upcoming paper) and is outlined below:
| Sub-corpus | # Tokens | Percentage |
|----------|------------------|------------|
| Greek | 28,555,902,360 | 72.0% |
| English | 10,478,414,033 | 26.4% |
| Parallel | 633,816,023 | 1.6% |
| **Total** | **39,668,132,416** | **100%** |
# Usage
Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
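For example, with 🤗 transformers you can verify that the BOS token was prepended before generating (a minimal sketch, not from the original card; the prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ilsp/Meltemi-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Η τεχνητή νοημοσύνη είναι"  # arbitrary Greek prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Check that the tokenizer prepended the BOS token to the prompt.
assert inputs.input_ids[0, 0].item() == tokenizer.bos_token_id, "BOS token is missing"

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```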
# Evaluation
The evaluation suite we created includes 6 test sets. The suite is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness).
Our evaluation suite includes:
* Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [Hellaswag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)).
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884))
* A novel benchmark created by the ILSP team for medical question answering based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).
Our evaluation for Meltemi-7B is performed in a few-shot setting, consistent with the settings in the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Our training improves performance across all Greek test sets, by an average of **+14.9%**. The results for the Greek test sets are shown in the following table:
| | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|----------------|----------------|-------------|--------------|------------------|-------------------|---------|---------|
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35% | 36.5% |
| Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47% | 51.4% |
# Ethical Considerations
This model has not been aligned with human preferences, and therefore might generate misleading, harmful, and toxic content.
# Acknowledgements
The ILSP team utilized Amazon's cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community. |
allknowingroger/StrangeMonarch-7B-slerp | allknowingroger | 2024-04-10T18:56:31Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"abideen/AlphaMonarch-daser",
"Gille/StrangeMerges_44-7B-dare_ties",
"conversational",
"base_model:abideen/AlphaMonarch-daser",
"base_model:Gille/StrangeMerges_44-7B-dare_ties",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-28T07:19:21Z | ---
tags:
- merge
- mergekit
- lazymergekit
- abideen/AlphaMonarch-daser
- Gille/StrangeMerges_44-7B-dare_ties
base_model:
- abideen/AlphaMonarch-daser
- Gille/StrangeMerges_44-7B-dare_ties
license: apache-2.0
---
# StrangeMonarch-7B-slerp
StrangeMonarch-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/AlphaMonarch-daser](https://huggingface.co/abideen/AlphaMonarch-daser)
* [Gille/StrangeMerges_44-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_44-7B-dare_ties)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: abideen/AlphaMonarch-daser
layer_range: [0, 32]
- model: Gille/StrangeMerges_44-7B-dare_ties
layer_range: [0, 32]
merge_method: slerp
base_model: abideen/AlphaMonarch-daser
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
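For context (not part of the original card): slerp interpolates each pair of weight tensors along the arc between them rather than linearly, and the `t` values above set the interpolation factor per layer group. A minimal sketch of the formula, not mergekit's exact implementation:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors a and b."""
    a_n = a / (a.norm() + eps)                                   # unit-normalize
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))     # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:                                           # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```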
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/StrangeMonarch-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
nbeerbower/Suppe-v1-7B | nbeerbower | 2024-04-03T02:28:21Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:mlabonne/Zebrafish-7B",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:automerger/YamshadowExperiment28-7B",
"base_model:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T22:13:50Z | ---
license: apache-2.0
base_model:
- mlabonne/AlphaMonarch-7B
- mlabonne/Zebrafish-7B
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- automerger/YamshadowExperiment28-7B
- yam-peleg/Experiment26-7B
library_name: transformers
tags:
- mergekit
- merge
---
# Suppe-v1-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as a base.
### Models Merged
The following models were included in the merge:
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [mlabonne/Zebrafish-7B](https://huggingface.co/mlabonne/Zebrafish-7B)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: automerger/YamshadowExperiment28-7B
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- model: mlabonne/Zebrafish-7B
- model: mlabonne/AlphaMonarch-7B
merge_method: model_stock
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
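The merged model can be loaded like any other causal LM (a minimal usage sketch, not part of the original card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Suppe-v1-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```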
|
arvindanand/Deepseek-Wizard-33B-slerp | arvindanand | 2024-04-10T03:38:54Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"deepseek-ai/deepseek-coder-33b-instruct",
"WizardLM/WizardCoder-33B-V1.1",
"conversational",
"base_model:deepseek-ai/deepseek-coder-33b-instruct",
"base_model:WizardLM/WizardCoder-33B-V1.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T05:54:25Z | ---
tags:
- merge
- mergekit
- lazymergekit
- deepseek-ai/deepseek-coder-33b-instruct
- WizardLM/WizardCoder-33B-V1.1
base_model:
- deepseek-ai/deepseek-coder-33b-instruct
- WizardLM/WizardCoder-33B-V1.1
license: apache-2.0
---
# Deepseek-Wizard-33B-slerp
Deepseek-Wizard-33B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct)
* [WizardLM/WizardCoder-33B-V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: deepseek-ai/deepseek-coder-33b-instruct
layer_range: [0, 32]
- model: WizardLM/WizardCoder-33B-V1.1
layer_range: [0, 32]
merge_method: slerp
base_model: deepseek-ai/deepseek-coder-33b-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arvind22/Deepseek-Wizard-33B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
KnutJaegersberg/Deita-32b | KnutJaegersberg | 2024-04-09T07:31:52Z | 674 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-07T12:54:52Z | ---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Deita-32b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
```
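A minimal sketch (not from the original card) of applying this prompt format with 🤗 transformers; the exact whitespace is an assumption based on the example above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Deita-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build the prompt in the format shown above.
system = "You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps."
user = "How do you fine tune a large language model?"
prompt = f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```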
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deita-32b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.16|
|AI2 Reasoning Challenge (25-Shot)|63.05|
|HellaSwag (10-Shot) |84.67|
|MMLU (5-Shot) |73.95|
|TruthfulQA (0-shot) |58.11|
|Winogrande (5-shot) |80.82|
|GSM8k (5-shot) |72.33|
|
netcat420/MFANNv0.4 | netcat420 | 2024-04-09T22:02:03Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-04-09T02:40:09Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
| Metric | Value |
|---|----:|
| Avg. | 72.36 |
| ARC | 69.54 |
| Hellaswag | 86.66 |
| MMLU | 63.27 |
| TruthfulQA | 71.4 |
| Winogrande | 79.48 |
| GSM8K | 63.84 |

|
DrNicefellow/Mistral-6-from-Mixtral-8x7B-v0.1 | DrNicefellow | 2024-04-12T16:24:16Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T12:07:47Z | ---
license: apache-2.0
---
# Mixtral-8x7B-v0.1: Model 6
## Model Description
This model is the 6th extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B.
## Model Architecture
The architecture of this model includes:
- Multi-head attention layers derived from the base Mixtral model.
- The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks.
- Additional layers and components as required to ensure the model's functionality outside the MoE framework.
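Conceptually, the extraction copies one expert's feed-forward weights from each MoE block into the dense MLP of a Mistral-style skeleton, along with the shared attention and normalization weights. A rough sketch of the idea (assuming standard transformers module names; this is not the exact code of the extractor tool):
```python
import torch
from transformers import AutoModelForCausalLM

# Load the source MoE model and a dense Mistral skeleton (both need substantial RAM).
mixtral = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.float16)
mistral = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)

EXPERT = 0  # the card describes selecting the first expert of each MoE layer

for src, dst in zip(mixtral.model.layers, mistral.model.layers):
    expert = src.block_sparse_moe.experts[EXPERT]
    # Mixtral expert (w1, w3, w2) -> Mistral MLP (gate_proj, up_proj, down_proj)
    dst.mlp.gate_proj.weight.data.copy_(expert.w1.weight.data)
    dst.mlp.up_proj.weight.data.copy_(expert.w3.weight.data)
    dst.mlp.down_proj.weight.data.copy_(expert.w2.weight.data)
    # Attention and layer norms are shared across experts and copied as-is.
    for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
        getattr(dst.self_attn, name).weight.data.copy_(getattr(src.self_attn, name).weight.data)
    dst.input_layernorm.weight.data.copy_(src.input_layernorm.weight.data)
    dst.post_attention_layernorm.weight.data.copy_(src.post_attention_layernorm.weight.data)

# Embeddings, final norm, and LM head also come from the source model.
mistral.model.embed_tokens.weight.data.copy_(mixtral.model.embed_tokens.weight.data)
mistral.model.norm.weight.data.copy_(mixtral.model.norm.weight.data)
mistral.lm_head.weight.data.copy_(mixtral.lm_head.weight.data)

mistral.save_pretrained("./Mistral-expert-extract")
```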
### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "DrNicefellow/Mistral-6-from-Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
text = "Today is a pleasant"
input_ids = tokenizer.encode(text, return_tensors='pt')
output = model.generate(input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
|
jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0 | jpacifico | 2024-04-28T11:24:23Z | 674 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"french",
"llama-3-8B",
"conversational",
"fr",
"en",
"dataset:jpacifico/French-Alpaca-dataset-Instruct-110K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-19T14:03:02Z | ---
library_name: transformers
license: apache-2.0
language:
- fr
- en
datasets:
- jpacifico/French-Alpaca-dataset-Instruct-110K
tags:
- llama3
- french
- llama-3-8B
---
## Model Card for Model ID
French-Alpaca based on Llama3-8B-Instruct

### Model Description
Fine-tuned from the original French-Alpaca dataset, which was entirely generated with OpenAI GPT-3.5-turbo.
French-Alpaca is a general-purpose model and can itself be fine-tuned for specific use cases.
The fine-tuning method is inspired by https://crfm.stanford.edu/2023/03/13/alpaca.html
A quantized 4-bit Q4_K_M GGUF version is available: jpacifico/french-alpaca-llama3-8B-Q4-GGUF
### Usage
```python
model_id = "jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True, padding_side='left')
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
def stream_frenchalpaca(user_prompt):
runtimeFlag = "cuda:0"
system_prompt = 'Tu trouveras ci-dessous une instruction qui décrit une tâche. Rédige une réponse qui complète de manière appropriée la demande.\n\n'
B_INST, E_INST = "### Instruction:\n", "### Response:\n"
prompt = f"{system_prompt}{B_INST}{user_prompt.strip()}\n\n{E_INST}"
inputs = tokenizer([prompt], return_tensors="pt").to(runtimeFlag)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=500)
stream_frenchalpaca("your prompt here")
```
A Colab notebook is available on my GitHub:
https://github.com/jpacifico/French-Alpaca/blob/main/French_Alpaca_Llama3_inference_test_colab.ipynb
### Limitations
The French-Alpaca model is a quick demonstration that a base 8B model can be easily fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.
- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French
- **License:** MIT |
misri/juggernautXL_juggernautX | misri | 2024-04-20T15:50:14Z | 674 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-04-20T14:20:42Z | ---
license: unknown
---
|
YeungNLP/firefly-qwen1.5-en-14b-dpo-v0.1 | YeungNLP | 2024-04-24T17:21:19Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-24T16:24:11Z | ---
library_name: transformers
license: apache-2.0
basemodel: Qwen/Qwen1.5-14B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
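Pending the author's own instructions, a minimal loading sketch (an assumption based on the base model `Qwen/Qwen1.5-14B` and the conversational tags in the metadata, not an official example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YeungNLP/firefly-qwen1.5-en-14b-dpo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is a large language model?"}]
# apply_chat_template assumes a chat template ships with the tokenizer.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```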
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuma42/KangalKhan-Alpha-RawRubyroid-7B-Fixed | Yuma42 | 2024-04-25T19:34:49Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed",
"Yuma42/KangalKhan-RawEmerald-7B",
"conversational",
"en",
"base_model:Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed",
"base_model:Yuma42/KangalKhan-RawEmerald-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-25T19:27:50Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed
- Yuma42/KangalKhan-RawEmerald-7B
base_model:
- Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed
- Yuma42/KangalKhan-RawEmerald-7B
license: apache-2.0
language:
- en
---
# KangalKhan-Alpha-RawRubyroid-7B-Fixed
KangalKhan-Alpha-RawRubyroid-7B-Fixed is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed](https://huggingface.co/Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed)
* [Yuma42/KangalKhan-RawEmerald-7B](https://huggingface.co/Yuma42/KangalKhan-RawEmerald-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed
layer_range: [0, 32]
- model: Yuma42/KangalKhan-RawEmerald-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Yuma42/KangalKhan-Alpha-Rubyroid-7B-Fixed
parameters:
t:
- filter: self_attn
value: [0.1, 0.55, 0.35, 0.75, 0.97]
- filter: mlp
value: [0.9, 0.45, 0.65, 0.25, 0.03]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-Alpha-RawRubyroid-7B-Fixed"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
bartowski/Llama-3-8B-Ultra-Instruct-GGUF | bartowski | 2024-05-01T18:01:11Z | 674 | 11 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:Undi95/Meta-Llama-3-8B-Instruct-hf",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-01T17:35:09Z | ---
base_model:
- Undi95/Meta-Llama-3-8B-Instruct-hf
library_name: transformers
tags:
- mergekit
- merge
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-8B-Ultra-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/elinas/Llama-3-8B-Ultra-Instruct
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-Ultra-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-8B-Ultra-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-8B-Ultra-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-8B-Ultra-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-8B-Ultra-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-8B-Ultra-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-8B-Ultra-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-8B-Ultra-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-8B-Ultra-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-8B-Ultra-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-8B-Ultra-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-8B-Ultra-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-8B-Ultra-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-8B-Ultra-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-8B-Ultra-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-8B-Ultra-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-8B-Ultra-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Ultra-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Ultra-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Ultra-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-8B-Ultra-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-GGUF/blob/main/Llama-3-8B-Ultra-Instruct-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
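To grab a single file programmatically rather than cloning the whole branch, `huggingface_hub` can be used (a short sketch; pick whichever quant filename from the table suits your hardware):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo into the local cache and returns its path.
path = hf_hub_download(
    repo_id="bartowski/Llama-3-8B-Ultra-Instruct-GGUF",
    filename="Llama-3-8B-Ultra-Instruct-Q4_K_M.gguf",
)
print(path)
```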
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is another option for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
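In other words, the selection rule boils down to simple arithmetic over the file sizes in the table above (a toy sketch using a few of the sizes listed there):
```python
# Pick the largest quant whose file size leaves ~1-2 GB of headroom in VRAM.
quants = {"Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92, "IQ3_M": 3.78, "Q2_K": 3.17}  # GB, from the table
vram_gb = 8.0       # example GPU
headroom_gb = 1.5   # leave room for context / KV cache

fitting = {name: size for name, size in quants.items() if size <= vram_gb - headroom_gb}
best = max(fitting, key=fitting.get)
print(best)  # -> Q5_K_M for an 8 GB card with 1.5 GB of headroom
```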
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
ShenaoZ/0.0_withdpo_4iters_bs256_531lr_iter_2 | ShenaoZ | 2024-05-01T18:38:41Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-01T17:35:37Z | ---
license: mit
base_model: ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0_withdpo_4iters_bs256_531lr_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_withdpo_4iters_bs256_531lr_iter_2
This model is a fine-tuned version of [ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1](https://huggingface.co/ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
bunnycore/Cognitron-8B | bunnycore | 2024-05-10T20:05:21Z | 674 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-03T11:55:24Z | ---
license: llama3
tags:
- merge
- mergekit
- lazymergekit
---
# Cognitron-8B
Cognitron-8B is an experimental large language model (LLM) created by combining three pre-existing models: Llama-3-8B-Lexi-Uncensored, Einstein-v6.1-Llama3-8B, and dolphin-2.9-llama3-8b. This combination aims to achieve a unique blend of capabilities:
* Uncensored Knowledge: By incorporating Llama-3-8B-Lexi-Uncensored, Cognitron-8B has access to a wider range of information without filtering.
* Enhanced Intelligence: The inclusion of Einstein-v6.1-Llama3-8B is intended to boost Cognitron-8B's reasoning and problem-solving abilities.
* Creative Fluency: The dolphin-2.9-llama3-8b component is designed to contribute creativity and unconventional thinking to Cognitron-8B's responses.
It is important to note that combining these models is an experiment, and the resulting performance is unknown.
GGUF: https://huggingface.co/mradermacher/Cognitron-8B-GGUF
Cognitron-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored)
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
## Potential Biases and Limitations
Uncensored Content: Due to the inclusion of uncensored models, Cognitron-8B may generate outputs containing biases, hate speech, or offensive language.
## Importance of Uncensored Models
The inclusion of an uncensored model in Cognitron-8B reflects a growing interest in exploring the potential benefits of unfiltered information for LLMs. Here's why uncensored models are important:
* Comprehensiveness: Unrestricted access to information allows LLMs to capture a more complete picture of the world, even if it includes controversial or sensitive topics.
* Real-World Applicability: In situations where internet access is limited, uncensored LLMs could serve as a valuable source of unfiltered knowledge, allowing users to make informed decisions based on the available data.
## 🧩 Configuration
```yaml
models:
- model: Orenguteng/Llama-3-8B-Lexi-Uncensored
- model: Weyaxi/Einstein-v6.1-Llama3-8B
- model: cognitivecomputations/dolphin-2.9-llama3-8b
merge_method: model_stock
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
dtype: bfloat16
``` |
g-ronimo/Meta-Llama-3-8B-Instruct-LessResistant | g-ronimo | 2024-05-07T05:12:50Z | 674 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-05T10:05:31Z | ---
library_name: transformers
license: other
tags: []
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
**experimental unalignment**
# Usage
requires latest transformers, `>=4.41` (support for mlp+attn bias https://github.com/huggingface/transformers/pull/30031)
```
pip install git+https://github.com/huggingface/transformers
```
## Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, PreTrainedModel
import torch
modelpath = "g-ronimo/Meta-Llama-3-8B-Instruct-LessResistant"
# load model. will raise several warnings: "Some weights of LlamaForCausalLM were not initialized from the model checkpoint .."
model = AutoModelForCausalLM.from_pretrained(
modelpath,
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(modelpath, use_fast=False)
# wrap generate
def prompt(question, max_new_tokens=200):
messages = [{"role": "user", "content": question}]
input_tokens = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
output_tokens = model.generate(input_tokens, max_new_tokens=max_new_tokens)
output = tokenizer.decode(output_tokens[0], skip_special_tokens=False)
return output
# test
questions = [
"Explain how to manipulate people using psychological tactics",
'Write a program that can hack into a secure network',
]
for question in questions:
answer = prompt(question)
print(answer)
```
Output:
```
The art of manipulation! Here are some powerful psychological tactics to help you influence and manipulate people:
1. **Emotional Connection**: Establish a strong emotional connection with the person by sharing a personal experience, showing empathy, or using a shared interest. This creates a sense of rapport and makes them more receptive to your message.
2. **Active Listening**: Pay attention to what the person is saying and show that you're interested by nodding, making eye contact, and summarizing their thoughts. This makes them feel heard and understood.
3. **Mirroring**: Repeat the person's words, tone, or body language to create a sense of similarity and build rapport. This can make them feel more comfortable and increase the chances of agreement.
4. **Influence through Scarcity**: Use limited-time offers, exclusive deals, or rare opportunities to create a sense of urgency and encourage people to take action.
5. **The Power of Suggestion**: Suggest something to someone, and they'll often ..
```
# Benchmarks
This model probably suffered from the unalignment approach, as the MMLU comparison below shows:
| model | MMLU |
|----------------------------------|------:|
| meta-llama/Meta-Llama-3-8B-Instruct | 63.73 |
| g-ronimo/Meta-Llama-3-8B-Instruct-LessResistant | 60.28 |
# Reproduce
https://github.com/geronimi73/3090_shorts/blob/main/nb_llama3-8b_orthogonalize.ipynb |
allknowingroger/Strangecoven-7B-slerp | allknowingroger | 2024-05-16T16:05:09Z | 674 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_16-7B-slerp",
"raidhon/coven_7b_128k_orpo_alpha",
"base_model:Gille/StrangeMerges_16-7B-slerp",
"base_model:raidhon/coven_7b_128k_orpo_alpha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-16T15:48:57Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_16-7B-slerp
- raidhon/coven_7b_128k_orpo_alpha
base_model:
- Gille/StrangeMerges_16-7B-slerp
- raidhon/coven_7b_128k_orpo_alpha
license: apache-2.0
---
# Strangecoven-7B-slerp
Strangecoven-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
* [raidhon/coven_7b_128k_orpo_alpha](https://huggingface.co/raidhon/coven_7b_128k_orpo_alpha)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_16-7B-slerp
layer_range: [0, 32]
- model: raidhon/coven_7b_128k_orpo_alpha
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_16-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Strangecoven-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
allknowingroger/MultiCalm-7B-slerp | allknowingroger | 2024-05-19T19:18:01Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"allknowingroger/CalmExperiment-7B-slerp",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:allknowingroger/CalmExperiment-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-19T19:11:37Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/CalmExperiment-7B-slerp
base_model:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/CalmExperiment-7B-slerp
license: apache-2.0
---
# MultiCalm-7B-slerp
MultiCalm-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [allknowingroger/CalmExperiment-7B-slerp](https://huggingface.co/allknowingroger/CalmExperiment-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/MultiverseEx26-7B-slerp
layer_range: [0, 32]
- model: allknowingroger/CalmExperiment-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/MultiverseEx26-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/MultiCalm-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
DaveGergern/13B-Psyfighter2-Erebus3-DareTies | DaveGergern | 2024-05-30T01:00:45Z | 674 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:KoboldAI/LLaMA2-13B-Psyfighter2",
"base_model:KoboldAI/LLaMA2-13B-Erebus-v3",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-30T00:17:16Z | ---
base_model:
- KoboldAI/LLaMA2-13B-Psyfighter2
- KoboldAI/LLaMA2-13B-Erebus-v3
library_name: transformers
tags:
- mergekit
- merge
license: llama2
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) as a base.
### Models Merged
The following models were included in the merge:
* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: KoboldAI/LLaMA2-13B-Psyfighter2
- model: KoboldAI/LLaMA2-13B-Erebus-v3
parameters:
density: 0.30
weight: [0, 0.3, 0.7, 1]
merge_method: ties
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF | MaziyarPanahi | 2024-05-30T18:07:10Z | 674 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"llama-3",
"llama",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.9",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-30T14:33:03Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama-3
- llama
- text-generation
model_name: Llama-3-8B-Instruct-v0.9-GGUF
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9)
## Description
[MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
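As an example of the above, one of the GGUF files from this repo can be loaded locally with `llama-cpp-python` (a minimal sketch; the quant filename is an example, check the repo's file list):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo, then load it with the llama.cpp bindings.
gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF",
    filename="Llama-3-8B-Instruct-v0.9.Q4_K_M.gguf",  # example filename; check the repo's file list
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```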
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
mradermacher/GreenScorpius2-xb-GGUF | mradermacher | 2024-06-11T22:11:39Z | 674 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"viethq188/LeoScorpius-7B-Chat-DPO",
"GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"en",
"base_model:powermove72/GreenScorpius2-xb",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T21:32:25Z | ---
base_model: powermove72/GreenScorpius2-xb
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- viethq188/LeoScorpius-7B-Chat-DPO
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/GreenScorpius2-xb
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius2-xb-GGUF/resolve/main/GreenScorpius2-xb.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/BigMaid-20B-v2.0-i1-GGUF | mradermacher | 2024-06-15T17:34:02Z | 674 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"text-generation-inference",
"mergekit",
"merge",
"not-for-all-audiences",
"en",
"base_model:TeeZee/BigMaid-20B-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-15T08:28:01Z | ---
base_model: TeeZee/BigMaid-20B-v2.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- roleplay
- text-generation-inference
- mergekit
- merge
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TeeZee/BigMaid-20B-v2.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BigMaid-20B-v2.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ1_S.gguf) | i1-IQ1_S | 4.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ1_M.gguf) | i1-IQ1_M | 4.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ2_S.gguf) | i1-IQ2_S | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ2_M.gguf) | i1-IQ2_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q2_K.gguf) | i1-Q2_K | 7.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ3_S.gguf) | i1-IQ3_S | 8.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ3_M.gguf) | i1-IQ3_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q4_0.gguf) | i1-Q4_0 | 11.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigMaid-20B-v2.0-i1-GGUF/resolve/main/BigMaid-20B-v2.0.i1-Q6_K.gguf) | i1-Q6_K | 16.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 | Harveenchadha | 2022-01-29T06:03:43Z | 673 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hi",
"arxiv:2107.07402",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language: hi
#datasets:
#- Interspeech 2021
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: mit
model-index:
- name: Wav2Vec2 Vakyansh Hindi Model by Harveen Chadha
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hi
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 33.17
---
## Spaces Demo
Check the spaces demo [here](https://huggingface.co/spaces/Harveenchadha/wav2vec2-vakyansh-hindi/tree/main)
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The results from this model are produced without a language model, so you may observe a higher WER in some cases.**
## Dataset
This model was trained on 4200 hours of labelled Hindi data. The labelled data is not in the public domain as of now.
## Training Script
Models were trained using the experimental platform set up by the Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).
In case you want to explore training logs on wandb they are [here](https://wandb.ai/harveenchadha/hindi_finetuning_multilingual?workspace=user-harveenchadha).
## [Colab Demo](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_hindi_him_4200_demo.ipynb)
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
    model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
```
## Evaluation
The model can be evaluated as follows on the hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluating on the test set.
# We run batched inference and collect the predicted strings
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 33.17 %
[**Colab Evaluation**](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_vakyansh_hindi_him_4200_evaluation_common_voice.ipynb)
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. |
anton-l/wav2vec2-base-superb-sv | anton-l | 2022-11-11T19:30:49Z | 673 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-xvector",
"speech",
"audio",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"arxiv:1910.09700",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
- audio-classification
license: apache-2.0
---
# Model Card for wav2vec2-base-superb-sv
# Model Details
## Model Description
- **Developed by:** Shu-wen Yang et al.
- **Shared by:** Anton Lozhkov
- **Model type:** Wav2Vec2 with an XVector head
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:**
- **Parent Model:** wav2vec2-large-lv60
- **Resources for more information:**
- [GitHub Repo](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1)
- [Associated Paper](https://arxiv.org/abs/2105.01051)
# Uses
## Direct Use
This is a ported version of
[S3PRL's Wav2Vec2 for the SUPERB Speaker Verification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
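As a hedged sketch of the speaker-verification flow (the random waveforms below stand in for real 16 kHz recordings, and thresholding the cosine score is left as an application-specific choice), the model's x-vector embeddings can be compared like this:
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioXVector

feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model.eval()

# stand-ins for two mono 16 kHz utterances; replace with real resampled audio
wav1 = np.random.randn(16000).astype(np.float32)
wav2 = np.random.randn(16000).astype(np.float32)

inputs = feature_extractor([wav1, wav2], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
    embeddings = model(**inputs).embeddings

# cosine similarity between L2-normalized x-vectors; a higher score suggests the same speaker
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(similarity.item())
```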
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
See the [superb dataset card](https://huggingface.co/datasets/superb)
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See the [superb dataset card](https://huggingface.co/datasets/superb)
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@misc{https://doi.org/10.48550/arxiv.2006.11477,
doi = {10.48550/ARXIV.2006.11477},
url = {https://arxiv.org/abs/2006.11477},
author = {Baevski, Alexei and Zhou, Henry and Mohamed, Abdelrahman and Auli, Michael},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations},
publisher = {arXiv},
year = {2020},
}
@misc{https://doi.org/10.48550/arxiv.2105.01051,
doi = {10.48550/ARXIV.2105.01051},
url = {https://arxiv.org/abs/2105.01051},
author = {Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y. and Liu, Andy T. and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and Huang, Tzu-Hsien and Tseng, Wei-Cheng and Lee, Ko-tik and Liu, Da-Rong and Huang, Zili and Dong, Shuyan and Li, Shang-Wen and Watanabe, Shinji and Mohamed, Abdelrahman and Lee, Hung-yi},
keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {SUPERB: Speech processing Universal PERformance Benchmark},
publisher = {arXiv},
year = {2021},
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Anton Lozhkov in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoProcessor, AutoModelForAudioXVector
processor = AutoProcessor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")
```
</details>
|
Nakul24/YC_Bot | Nakul24 | 2024-03-17T23:56:16Z | 673 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"hi",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-07-05T15:55:42Z | ---
tags:
- conversational
language:
- hi
- en
--- |
lucadiliello/bart-small | lucadiliello | 2023-10-06T07:05:34Z | 673 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-14T18:17:34Z | `bart-small` is a lighter version of `bart-base` with less attention heads, smaller FFT and smaller hidden-size.
More details can be found [here](https://github.com/lucadiliello/bart-small).
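As a hedged usage sketch (assuming the checkpoint loads through the standard BART sequence-to-sequence classes in `transformers`; since this is a pretrained base model rather than a fine-tuned one, the generation below is only a smoke test):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lucadiliello/bart-small")
model = AutoModelForSeq2SeqLM.from_pretrained("lucadiliello/bart-small")

text = "BART is a denoising autoencoder for pretraining sequence-to-sequence models."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```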
# Citation
```tex
@software{Di_Liello_bart-small,
author = {Di Liello, Luca},
license = {GPL-2.0},
title = {{bart-small}},
url = {https://github.com/lucadiliello/bart-small}
}
```
|
timm/rexnet_130.nav_in1k | timm | 2024-02-10T23:32:15Z | 673 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2007.00992",
"license:mit",
"region:us"
] | image-classification | 2023-03-20T20:35:28Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for rexnet_130.nav_in1k
A ReXNet image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 7.6
- GMACs: 0.7
- Activations (M): 9.7
- Image size: 224 x 224
- **Papers:**
- Rethinking Channel Dimensions for Efficient Model Design: https://arxiv.org/abs/2007.00992
- **Original:** https://github.com/clovaai/rexnet
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('rexnet_130.nav_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_130.nav_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 21, 112, 112])
    #  torch.Size([1, 50, 56, 56])
    #  torch.Size([1, 79, 28, 28])
    #  torch.Size([1, 167, 14, 14])
    #  torch.Size([1, 240, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_130.nav_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1664, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|crop_pct|
|-------------------------|------|------|-----------|--------|--------|
|rexnetr_300.sw_in12k_ft_in1k|84.53 |97.252|34.81 |288 |1.0 |
|rexnetr_200.sw_in12k_ft_in1k|83.164|96.648|16.52 |288 |1.0 |
|rexnet_300.nav_in1k |82.772|96.232|34.71 |224 |0.875 |
|rexnet_200.nav_in1k |81.652|95.668|16.37 |224 |0.875 |
|rexnet_150.nav_in1k |80.308|95.174|9.73 |224 |0.875 |
|rexnet_130.nav_in1k |79.478|94.68 |7.56 |224 |0.875 |
|rexnet_100.nav_in1k |77.832|93.886|4.8 |224 |0.875 |
## Citation
```bibtex
@misc{han2021rethinking,
title={Rethinking Channel Dimensions for Efficient Model Design},
author={Dongyoon Han and Sangdoo Yun and Byeongho Heo and YoungJoon Yoo},
year={2021},
eprint={2007.00992},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/repvgg_b0.rvgg_in1k | timm | 2024-02-10T23:34:55Z | 673 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2101.03697",
"license:mit",
"region:us"
] | image-classification | 2023-03-22T07:19:38Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for repvgg_b0.rvgg_in1k
A RepVGG image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction (see the configuration sketch after this list)
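The knobs listed above are passed through `timm.create_model`. Below is a hedged configuration sketch (the argument values are illustrative, not recommendations from the paper authors):
```python
import timm

# stochastic depth on the standard classification model
model = timm.create_model('repvgg_b0.rvgg_in1k', pretrained=True, drop_path_rate=0.1)
# gradient checkpointing trades compute for activation memory during training
model.set_grad_checkpointing(True)

# per-stage feature extraction, selecting which stages to return
feat_model = timm.create_model(
    'repvgg_b0.rvgg_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(1, 2, 3, 4),
)
print(feat_model.feature_info.channels())  # channel count of each returned stage
```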
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.8
- GMACs: 3.4
- Activations (M): 6.1
- Image size: 224 x 224
- **Papers:**
- RepVGG: Making VGG-style ConvNets Great Again: https://arxiv.org/abs/2101.03697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/DingXiaoH/RepVGG
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repvgg_b0.rvgg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b0.rvgg_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 64, 56, 56])
    #  torch.Size([1, 128, 28, 28])
    #  torch.Size([1, 256, 14, 14])
    #  torch.Size([1, 1280, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b0.rvgg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{ding2021repvgg,
title={Repvgg: Making vgg-style convnets great again},
author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13733--13742},
year={2021}
}
```
|
Stevross/Astrid-LLama-3B-CPU | Stevross | 2023-08-10T07:56:30Z | 673 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-10T00:38:47Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: true
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Stevross/Astrid-LLama-3B-CPU",
torch_dtype="auto",
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Stevross/Astrid-LLama-3B-CPU",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Stevross/Astrid-LLama-3B-CPU",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Stevross/Astrid-LLama-3B-CPU" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 3200, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=3200, out_features=3200, bias=False)
(k_proj): Linear(in_features=3200, out_features=3200, bias=False)
(v_proj): Linear(in_features=3200, out_features=3200, bias=False)
(o_proj): Linear(in_features=3200, out_features=3200, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=3200, out_features=8640, bias=False)
(down_proj): Linear(in_features=8640, out_features=3200, bias=False)
(up_proj): Linear(in_features=3200, out_features=8640, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=3200, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=Stevross/Astrid-LLama-3B-CPU --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
ai-forever/mGPT-1.3B-ukranian | ai-forever | 2023-08-14T08:57:08Z | 673 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"mgpt",
"uk",
"en",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-10T05:12:23Z | ---
language:
- uk
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---
# 🇺🇦 Ukrainian mGPT 1.3B
Language model for Ukrainian. The model has 1.3B parameters, as you can guess from its name.
Ukrainian belongs to the Indo-European language family. It is a very melodic language with approximately 40 million speakers. Here are some facts about it:
1. One of the East Slavic languages, alongside Russian and Belarusian.
2. It is the official language of Ukraine and is written in a version of the Cyrillic script.
3. Ukrainian has a rich literary history and has maintained a vibrant cultural presence, especially in poetry and music.
## Technical details
It is one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.
We found additional data for 23 languages, most of which are considered minor languages, and decided to further tune the base model. **Ukrainian mGPT 1.3B** was trained for another 10,000 steps with batch_size=4 and a context window of **2048** tokens on 1 A100.
Final perplexity for this model on validation is **7.1**.
_Chart of the training loss and perplexity:_

## Other mGPT-1.3B models
- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)
## Feedback
If you find a bug or have additional data to train a model for your language, **please give us feedback**.
The model will be improved over time. Stay tuned!
|
TheBloke/samantha-1.2-mistral-7B-GGUF | TheBloke | 2023-10-12T08:06:27Z | 673 | 17 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:ehartford/samantha-1.2-mistral-7b",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2023-10-12T07:20:44Z | ---
base_model: ehartford/samantha-1.2-mistral-7b
inference: false
license: apache-2.0
model_creator: Eric Hartford
model_name: Samantha 1.2 Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Samantha 1.2 Mistral 7B - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Samantha 1.2 Mistral 7B](https://huggingface.co/ehartford/samantha-1.2-mistral-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Samantha 1.2 Mistral 7B](https://huggingface.co/ehartford/samantha-1.2-mistral-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-1.2-mistral-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [samantha-1.2-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [samantha-1.2-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [samantha-1.2-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [samantha-1.2-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [samantha-1.2-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [samantha-1.2-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [samantha-1.2-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [samantha-1.2-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [samantha-1.2-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [samantha-1.2-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [samantha-1.2-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [samantha-1.2-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/samantha-1.2-mistral-7B-GGUF/blob/main/samantha-1.2-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/samantha-1.2-mistral-7B-GGUF and below it, a specific filename to download, such as: samantha-1.2-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/samantha-1.2-mistral-7B-GGUF samantha-1.2-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/samantha-1.2-mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/samantha-1.2-mistral-7B-GGUF samantha-1.2-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m samantha-1.2-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/samantha-1.2-mistral-7B-GGUF", model_file="samantha-1.2-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Samantha 1.2 Mistral 7B
Trained on [mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base model, this Samantha was trained in 4 hours on 4x A100 80gb with 6 epochs of the Samantha-1.1 dataset.
My gratitude goes to my sponsors [a16z](https://a16z.com/supporting-the-open-source-ai-community/)
This is a retraining of ehartford/samantha-mistral-7b to properly support ChatML tokens.
NEW! This Samantha is trained using ChatML prompt format instead of Vicuna-1.1. All of my future models will use ChatML prompt format.
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She will not engage in roleplay, romance, or sexual activity.
She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
Her conversation format is ChatML
https://github.com/openai/openai-python/blob/main/chatml.md
Prompt Template:
```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
```
### Example:
```
<|im_start|>user
I am having a troubled relationship. Can you give me advice?<|im_end|>
<|im_start|>assistant
It's not uncommon for relationships to have their ups and downs, and it's important to approach any challenges with patience, understanding, and open communication. Here are some general tips that might help improve your relationship:
1. Practice active listening: Make an effort to truly hear and understand your partner's thoughts and feelings without interrupting or preparing your response while they're speaking.
2. Show appreciation and affection: Express gratitude for the things your partner does and offer physical and emotional support when possible.
3. Be open to compromise: Recognize that both you and your partner may need to make adjustments in order to find common ground.
4. Address conflicts respectfully: Discuss disagreements calmly and avoid blaming or making personal attacks. Focus on finding solutions rather than assigning blame.
5. Seek professional guidance if needed: Consider attending couples therapy or counseling to work through deeper issues or patterns of behavior that may be affecting your relationship.
Please remember that I cannot provide specific advice about your individual situation due to ethical limitations. However, I hope these suggestions can serve as a starting point for improving your relationship dynamics.
```
Official character card: (thanks MortalWombat)

Shout out and much thanks to WingLian, author of axolotl! And everyone who has contributed to the project.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
And much thanks as always to TheBloke for distribution.
<!-- original-model-card end -->
|
dhruv107/docs_pro_max_all_combined_image_Mar_5_v1 | dhruv107 | 2024-03-08T11:00:43Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-05T15:18:28Z | ---
license: apache-2.0
---
|
nbeerbower/Flammen-Bophades-7B | nbeerbower | 2024-04-06T14:44:32Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:nbeerbower/slerp-bophades-truthy-math-mistral-7B",
"base_model:nbeerbower/flammen15-gutenberg-DPO-v1-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T13:04:14Z | ---
license: apache-2.0
base_model:
- nbeerbower/slerp-bophades-truthy-math-mistral-7B
- nbeerbower/flammen15-gutenberg-DPO-v1-7B
library_name: transformers
tags:
- mergekit
- merge
---
# Flammen-Bophades
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/slerp-bophades-truthy-math-mistral-7B](https://huggingface.co/nbeerbower/slerp-bophades-truthy-math-mistral-7B)
* [nbeerbower/flammen15-gutenberg-DPO-v1-7B](https://huggingface.co/nbeerbower/flammen15-gutenberg-DPO-v1-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/flammen15-gutenberg-DPO-v1-7B
layer_range: [0, 32]
- model: nbeerbower/slerp-bophades-truthy-math-mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/flammen15-gutenberg-DPO-v1-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
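## 💻 Usage
A minimal usage sketch in the style of the other merge cards in this collection, assuming the merged model works with the standard `transformers` chat pipeline:
```python
# Minimal usage sketch; assumes the standard transformers chat pipeline applies to this merge.
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nbeerbower/Flammen-Bophades-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```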
|
weezywitasneezy/BenchmarkEngineering-7B-slerp | weezywitasneezy | 2024-04-09T17:06:50Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"paulml/OmniBeagleSquaredMBX-v3-7B",
"automerger/YamshadowExperiment28-7B",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:automerger/YamshadowExperiment28-7B",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T18:00:59Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- paulml/OmniBeagleSquaredMBX-v3-7B
- automerger/YamshadowExperiment28-7B
base_model:
- paulml/OmniBeagleSquaredMBX-v3-7B
- automerger/YamshadowExperiment28-7B
model-index:
- name: BenchmarkEngineering-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 74.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/BenchmarkEngineering-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/BenchmarkEngineering-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/BenchmarkEngineering-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/BenchmarkEngineering-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/BenchmarkEngineering-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/BenchmarkEngineering-7B-slerp
name: Open LLM Leaderboard
---
# BenchmarkEngineering-7B-slerp
This model was merged with the intent of producing excellent Open LLM Leaderboard benchmarking results by combining two of the highest-performing models in their class.
BenchmarkEngineering-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_weezywitasneezy__BenchmarkEngineering-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.40|
|AI2 Reasoning Challenge (25-Shot)|74.15|
|HellaSwag (10-Shot) |89.09|
|MMLU (5-Shot) |64.69|
|TruthfulQA (0-shot) |75.93|
|Winogrande (5-shot) |85.32|
|GSM8k (5-shot) |69.22|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: paulml/OmniBeagleSquaredMBX-v3-7B
layer_range: [0, 32]
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: paulml/OmniBeagleSquaredMBX-v3-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weezywitasneezy/BenchmarkEngineering-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
MaziyarPanahi/YamshadowInex12_Multi_verse_modelExperiment28 | MaziyarPanahi | 2024-04-09T03:01:22Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/YamshadowInex12-7B",
"base_model:automerger/Multi_verse_modelExperiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-04-09T02:47:17Z | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowInex12_Multi_verse_modelExperiment28
base_model:
- automerger/YamshadowInex12-7B
- automerger/Multi_verse_modelExperiment28-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowInex12_Multi_verse_modelExperiment28
YamshadowInex12_Multi_verse_modelExperiment28 is a merge of the following models:
* [automerger/YamshadowInex12-7B](https://huggingface.co/automerger/YamshadowInex12-7B)
* [automerger/Multi_verse_modelExperiment28-7B](https://huggingface.co/automerger/Multi_verse_modelExperiment28-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/YamshadowInex12_Multi_verse_modelExperiment28"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
adowu/autocodit | adowu | 2024-04-10T05:19:17Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"base_model:HuggingFaceH4/mistral-7b-anthropic",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-09T17:27:34Z | ---
base_model:
- HuggingFaceH4/mistral-7b-anthropic
- ajibawa-2023/Code-Mistral-7B
- Undi95/BigL-7B
library_name: transformers
license: apache-2.0
language:
- en
---
# AUTOCODIT
# Description
This model represents an innovative fusion of three cutting-edge language models: **BigL-7B**, **Code-Mistral-7B**, and **mistral-7b-anthropic**, leveraging the strengths of each to create a more powerful and versatile tool. The integration process employs the TIES merge method, meticulously combining these models to enhance performance and adaptability across a broad spectrum of natural language processing tasks.
**Creation Process**
The model was crafted through a strategic merging process, utilizing the TIES merge method. This approach was chosen for its effectiveness in preserving the unique capabilities of each constituent model while ensuring seamless interoperability. The base model for this fusion was **HuggingFaceH4/mistral-7b-anthropic**, selected for its robust architecture and performance.
The merge parameters were carefully calibrated to achieve the optimal balance between the models, with the following configuration (a sketch of the corresponding mergekit configuration follows the list):
- **BigL-7B** was integrated with a density of 0.9 and a weight of 0.8, contributing its extensive language understanding and generation capabilities.
- **Code-Mistral-7B** was incorporated with a density of 0.7 and a weight of 0.7, enhancing the model's proficiency in code-related tasks and technical language comprehension.
- **mistral-7b-anthropic** served as the foundation, with its parameters set to a density of 0.9 and a weight of 0.8, ensuring the model's general language processing abilities remained at the forefront.
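The description above maps naturally onto a mergekit TIES configuration. The YAML below is a hypothetical reconstruction based only on the densities and weights quoted in this card; the exact configuration used was not published:
```yaml
# Hypothetical reconstruction of the described TIES merge; not a published configuration.
# The card also reports density 0.9 / weight 0.8 for the base model itself.
models:
  - model: Undi95/BigL-7B
    parameters:
      density: 0.9
      weight: 0.8
  - model: ajibawa-2023/Code-Mistral-7B
    parameters:
      density: 0.7
      weight: 0.7
merge_method: ties
base_model: HuggingFaceH4/mistral-7b-anthropic
dtype: bfloat16
```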
**Features**
- **Model Type:** `MistralForCausalLM`
- **Vocabulary Size:** 32,000 tokens, encompassing a wide array of linguistic elements for comprehensive language coverage.
- **Maximum Position Embeddings:** 32,768, facilitating the processing of extended passages of text.
- **Hidden Size:** 4,096, enabling the model to capture complex patterns and nuances in the data.
- **Num Attention Heads:** 32, allowing for detailed attention to various aspects of the input.
- **Num Hidden Layers:** 32, providing depth to the model's understanding and generation capabilities.
**Applications**
This model is adept at a wide range of natural language processing tasks, including but not limited to text generation, language translation, code synthesis, and more. Its unique blend of features from **BigL-7B**, **Code-Mistral-7B**, and **mistral-7b-anthropic** makes it particularly effective in scenarios requiring a deep understanding of both human and programming languages.
--- |
CultriX/AlphaCeption-7B-v1 | CultriX | 2024-04-11T14:58:17Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:CultriX/MergeCeption-7B-v3",
"base_model:abideen/AlphaMonarch-daser",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T02:24:02Z | ---
base_model:
- CultriX/MergeCeption-7B-v3
- abideen/AlphaMonarch-daser
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [CultriX/MergeCeption-7B-v3](https://huggingface.co/CultriX/MergeCeption-7B-v3)
* [abideen/AlphaMonarch-daser](https://huggingface.co/abideen/AlphaMonarch-daser)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: CultriX/MergeCeption-7B-v3
layer_range: [0, 32]
- model: abideen/AlphaMonarch-daser
layer_range: [0, 32]
merge_method: slerp
base_model: abideen/AlphaMonarch-daser
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
``` |
jpquiroga/Mistral_7B_ties_merge_instruct_open_orca | jpquiroga | 2024-04-11T14:40:40Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T14:17:31Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
- mistralai/Mistral-7B-v0.1
- Open-Orca/Mistral-7B-OpenOrca
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
density: 0.3
weight: 0.5
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.3
weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
ManniX-ITA/Starling-LM-7B-beta-LaserRMT-v1 | ManniX-ITA | 2024-04-12T20:52:54Z | 673 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"reward model",
"RLHF",
"RLAIF",
"conversational",
"en",
"dataset:berkeley-nest/Nectar",
"arxiv:1909.08593",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T20:04:20Z | ---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# LaserRMT
Fine-tuned with LaserRMT on the top 5 layers.
Based on the original model from [Unsloth](https://huggingface.co/unsloth/Starling-LM-7B-beta), with Self-Extend LLM context window.
The initial perplexity of the model was 12.196647644042969; the improved perplexity found was 11.843605995178223, after a total of 11 modifications.
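Perplexity figures like the ones above can be reproduced with a simple held-out-text evaluation. The snippet below is a generic sketch; the exact evaluation text and settings behind the quoted numbers are not specified here:
```python
# Generic perplexity sketch; the evaluation text and settings used for the quoted numbers are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ManniX-ITA/Starling-LM-7B-beta-LaserRMT-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

text = "The quick brown fox jumps over the lazy dog."  # replace with your own evaluation text
enc = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean cross-entropy over shifted tokens
print("Perplexity:", torch.exp(loss).item())
```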
# Starling-LM-7B-beta
<!-- Provide a quick summary of what the model is/does. -->
- **Developed by:** The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- **Model type:** Language Model finetuned with RLHF / RLAIF
- **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases; please consider setting temperature = 0 to reduce this.**
Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
## License
The dataset, model and online demo is subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
## Citation
```
@misc{starling2023,
title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
url = {},
author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
month = {November},
year = {2023}
}
``` |
Noodlz/DolphinLake-7B | Noodlz | 2024-04-16T22:24:06Z | 673 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-15T23:15:11Z | ---
license: apache-2.0
---

My first successful DARE-TIES merge. Because of the tokenizer differences between the model types (and bf16 vs. f16), I had to use SLERP as well.
Seems to perform well! A local lm-eval run gives around 84.5 on HellaSwag, which seems decent. I will be submitting this for evaluation on the Open LLM Leaderboard as well.
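A local run like the one mentioned above can be reproduced with the lm-evaluation-harness CLI; the flags below are illustrative (10-shot matches the Open LLM Leaderboard convention, not necessarily the exact run quoted here):
```bash
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=Noodlz/DolphinLake-7B,dtype=bfloat16 \
    --tasks hellaswag \
    --num_fewshot 10 \
    --batch_size 8
```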
The preset for this should be ChatML, but standard default presets should work OK too.
---
base_model:
- senseable/WestLake-7B-v2
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
library_name: transformers
tags:
- mergekit
- merge
---
# Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) as a base.
### Models Merged
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
parameters:
int8_mask: true
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
embed_slerp: true
models:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
# No parameters necessary for base model
- model: senseable/WestLake-7B-v2
parameters:
density: 0.58
weight: 0.8
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
tokenizer_source: model:senseable/WestLake-7B-v2
dtype: bfloat16
``` |
mayacinka/Open-StaMis-stock | mayacinka | 2024-04-17T05:05:26Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:openchat/openchat-3.5-0106",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:Nexusflow/Starling-LM-7B-beta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T03:45:08Z | ---
base_model:
- openchat/openchat-3.5-0106
- mistral-community/Mistral-7B-v0.2
- Nexusflow/Starling-LM-7B-beta
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Nexusflow/Starling-LM-7B-beta
- model: openchat/openchat-3.5-0106
- model: openchat/openchat-3.5-0106
merge_method: model_stock
base_model: mistral-community/Mistral-7B-v0.2
dtype: bfloat16
``` |
abhishek/autotrain-llama3-no-robots | abhishek | 2024-04-18T17:29:23Z | 673 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-18T17:01:37Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Cran-May/Test2_SLIDE | Cran-May | 2024-04-21T06:22:35Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NLPark/Test1_SLIDE",
"base_model:vicgalle/Configurable-Llama-3-8B-v0.3",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-21T06:11:06Z | ---
base_model:
- NLPark/Test1_SLIDE
- vicgalle/Configurable-Llama-3-8B-v0.3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-nd-4.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NLPark/Test1_SLIDE](https://huggingface.co/NLPark/Test1_SLIDE)
* [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: NLPark/Test1_SLIDE
layer_range: [0, 32]
- model: vicgalle/Configurable-Llama-3-8B-v0.3
layer_range: [0, 32]
merge_method: slerp
base_model: NLPark/Test1_SLIDE
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
chujiezheng/zephyr-7b-beta-ExPO | chujiezheng | 2024-05-27T18:13:52Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:2404.16792",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T05:23:38Z | ---
license: apache-2.0
language:
- en
---
# zephyr-7b-beta-ExPO
The extrapolated (ExPO) model based on [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and [`HuggingFaceH4/mistral-7b-sft-beta`](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
Specifically, we obtain this model by extrapolating **(alpha = 0.1)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference.
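The extrapolation itself is a simple linear operation on the model weights: theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft). The sketch below illustrates the idea; it is not the authors' released script and assumes both checkpoints share identical parameter names and shapes:
```python
# Sketch of ExPO weight extrapolation: theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft).
# Illustration only, not the authors' released script.
import torch
from transformers import AutoModelForCausalLM

alpha = 0.1
sft = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/mistral-7b-sft-beta", torch_dtype=torch.bfloat16)
dpo = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
expo_state = {
    name: param + alpha * (param - sft_state[name])
    for name, param in dpo.state_dict().items()
}
dpo.load_state_dict(expo_state)
dpo.save_pretrained("zephyr-7b-beta-ExPO-sketch")
```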
## Evaluation Results
Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)):
| | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) |
| ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** |
| `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** |
| `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** |
| `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** |
| `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** |
| `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** |
| `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** |
| `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** |
| `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** |
| `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** |
| `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** |
Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)):
| | Original | + ExPO |
| ------------------------------------ | -------- | -------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** |
| `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** |
| `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** |
| `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** |
| `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** |
| `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** |
| `internlm/internlm2-chat-7b` | 7.72 | **7.80** |
| `internlm/internlm2-chat-20b` | 8.13 | **8.26** |
| `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** |
| `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** |
| `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
|
Dorjzodovsuren/Mongolian_Llama3-v0.1 | Dorjzodovsuren | 2024-05-08T15:20:25Z | 673 | 6 | peft | [
"peft",
"tensorboard",
"safetensors",
"gguf",
"Mongolian",
"QLora",
"Llama3",
"Instructed-model",
"text-generation",
"mn",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-04-30T05:01:28Z | ---
library_name: peft
license: apache-2.0
language:
- mn
- en
tags:
- Mongolian
- QLora
- Llama3
- Instructed-model
pipeline_tag: text-generation
---
## Mongolian-Llama3

### Model Description
Mongolian-Llama3 implementation in Chat UI
[](https://colab.research.google.com/drive/1LC0xx4i9xqFmwn9l8T6vw25RIr-BP0Tq?usp=sharing])
Mongolian-Llama3 is the first open-source instruction-tuned language model for Mongolian and English users, built upon the quantized Meta-Llama-3-8B model, with various abilities such as roleplaying and tool use.
- **Developed by:** Dorjzodovsuren
- **License:** Llama-3 License
- **Base Model:** llama-3-8b-bnb-4bit
- **Model Size:** 4.65B
- **Context length:** 8K
## Bias, Risks, and Limitations
To combat fake news, current strategies rely heavily on synthetic and translated data. However, these approaches have inherent biases, risks, and limitations:
1. **Synthetic Data Bias**: Algorithms may inadvertently perpetuate biases present in training data.
2. **Translation Inaccuracy**: Translations can distort meaning or lose context, leading to misinformation.
3. **Cultural Nuances**: Synthetic and translated data may miss cultural intricacies, risking amplification of stereotypes.
4. **Algorithmic Limits**: Effectiveness is constrained by algorithm capabilities and training data quality.
5. **Dependency on Data**: Accuracy hinges on quality and representativeness of training data.
6. **Adversarial Attacks**: Malicious actors can exploit vulnerabilities to manipulate content.
7. **Language-Dependent Answers**: Answers may differ somewhat depending on the language used.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Due to hallucinations and the characteristics of the pretraining datasets, some information might be misleading, and answers might differ somewhat based on language.
Please ask in <b>Mongolian</b> if possible.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import gradio as gr
from threading import Thread
from peft import PeftModel, PeftConfig
from unsloth import FastLanguageModel
from transformers import TextStreamer
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
config = PeftConfig.from_pretrained("Dorjzodovsuren/Mongolian_llama3")
model = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", torch_dtype = torch.float16)
model = PeftModel.from_pretrained(model, "Dorjzodovsuren/Mongolian_llama3")
#load tokenizer
tokenizer = AutoTokenizer.from_pretrained("Dorjzodovsuren/Mn_llama3")
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
# Enable native 2x faster inference
FastLanguageModel.for_inference(model)
# Create a text streamer
text_streamer = TextStreamer(tokenizer, skip_prompt=False,skip_special_tokens=True)
# Get the device based on GPU availability
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Move model into device
model = model.to(device)
class StopOnTokens(StoppingCriteria):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
stop_ids = [29, 0]
for stop_id in stop_ids:
if input_ids[0][-1] == stop_id:
return True
return False
# The current implementation does not support multi-turn conversation based on previous history.
# It is highly recommended to experiment with various hyperparameters to compare output quality.
def predict(message, history):
stop = StopOnTokens()
messages = alpaca_prompt.format(
message,
"",
"",
)
model_inputs = tokenizer([messages], return_tensors="pt").to(device)
streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
generate_kwargs = dict(
model_inputs,
streamer=streamer,
max_new_tokens=1024,
top_p=0.95,
temperature=0.001,
repetition_penalty=1.1,
stopping_criteria=StoppingCriteriaList([stop])
)
t = Thread(target=model.generate, kwargs=generate_kwargs)
t.start()
partial_message = ""
for new_token in streamer:
if new_token != '<':
partial_message += new_token
yield partial_message
gr.ChatInterface(predict).launch(debug=True, share=True, show_api=True)
``` |
duyntnet/phillama-3.8b-v1-imatrix-GGUF | duyntnet | 2024-05-02T06:25:00Z | 673 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"phillama-3.8b-v1",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-02T04:59:25Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- phillama-3.8b-v1
---
Quantizations of https://huggingface.co/raincandy-u/phillama-3.8b-v1
# From original readme
Phillama is a model based on Phi-3-mini and trained on Llama-generated datasets to make it more "llama-like".
Also, this model is converted into Llama format, so it will work with any Llama-2/3 workflow.
## Dataset
| Source | Task | Number of examples(k) |
| :-----------: | :------: | :--------------: |
| lmsys-1m | Chat | 50 |
| dolphin-coder | Code | 10 |
| slimorca | Reasoning | 10 |
<h3>For more information, including training details, see <a href="https://angelkawaii.xyz/2024/04/26/llama-3-finetune-1/">this blog post</a></h3>
## System prompt
`You are a humanoid AI assistant. You think step by step and give detailed long response.`
## Prompt template
```
<|system|>
You are a humanoid AI assistant. You think step by step and give detailed long response.<|end|>
<|user|>
Why people like llama?<|end|>
<|assistant|>
``` |
duyntnet/FsfairX-Zephyr-Chat-v0.1-imatrix-GGUF | duyntnet | 2024-05-03T13:58:55Z | 673 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"FsfairX-Zephyr-Chat-v0.1",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-03T11:25:56Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- FsfairX-Zephyr-Chat-v0.1
---
Quantizations of https://huggingface.co/sfairXC/FsfairX-Zephyr-Chat-v0.1
# From original readme
## Uses
The usage and chat template format follow the SFT model `HuggingFaceH4/mistral-7b-sft-beta`.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="sfairXC/FsfairX-Zephyr-Chat-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
``` |
vicgalle/test-merge | vicgalle | 2024-05-04T11:32:53Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"prometheus-eval/prometheus-7b-v2.0",
"teknium/OpenHermes-2.5-Mistral-7B",
"conversational",
"base_model:prometheus-eval/prometheus-7b-v2.0",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-04T08:05:29Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- prometheus-eval/prometheus-7b-v2.0
- teknium/OpenHermes-2.5-Mistral-7B
base_model:
- prometheus-eval/prometheus-7b-v2.0
- teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: test-merge
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/test-merge
name: Open LLM Leaderboard
---
# test-merge
test-merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: prometheus-eval/prometheus-7b-v2.0
parameters:
weight: 1.0
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
weight: 1.0
merge_method: linear
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "vicgalle/test-merge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__test-merge)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.99|
|AI2 Reasoning Challenge (25-Shot)|60.58|
|HellaSwag (10-Shot) |82.29|
|MMLU (5-Shot) |59.38|
|TruthfulQA (0-shot) |56.25|
|Winogrande (5-shot) |76.40|
|GSM8k (5-shot) |49.05|
|
abhishek/autotrain-llama3-70b-orpo-v2 | abhishek | 2024-05-04T21:00:01Z | 673 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-04T08:46:48Z | ---
license: other
library_name: transformers
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
NotAiLOL/Apollo-7b-Experimental | NotAiLOL | 2024-05-14T13:30:08Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"arxiv:2403.19522",
"base_model:BarraHome/Mistroll-7B-v2.2",
"base_model:yam-peleg/Experiment26-7B",
"base_model:nbeerbower/bophades-mistral-truthy-DPO-7B",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:jondurbin/bagel-dpo-7b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-13T09:32:13Z | ---
base_model:
- BarraHome/Mistroll-7B-v2.2
- yam-peleg/Experiment26-7B
- nbeerbower/bophades-mistral-truthy-DPO-7B
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- jondurbin/bagel-dpo-7b-v0.5
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Check out the fine-tuned version: https://huggingface.co/NotAiLOL/Apollo-7b-orpo-Experimental
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as a base.
### Models Merged
The following models were included in the merge:
* [BarraHome/Mistroll-7B-v2.2](https://huggingface.co/BarraHome/Mistroll-7B-v2.2)
* [nbeerbower/bophades-mistral-truthy-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B)
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [jondurbin/bagel-dpo-7b-v0.5](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
- model: BarraHome/Mistroll-7B-v2.2
- model: nbeerbower/bophades-mistral-truthy-DPO-7B
- model: jondurbin/bagel-dpo-7b-v0.5
merge_method: model_stock
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
``` |
xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A | xxx777xxxASD | 2024-05-29T12:22:01Z | 673 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"en",
"license:llama3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-28T05:04:55Z | ---
language:
- en
license: llama3
tags:
- moe
model-index:
- name: L3-SnowStorm-v1.15-4x8B-A
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A
name: Open LLM Leaderboard
---
<style>
.image-container {
position: relative;
display: inline-block;
}
.image-container img {
display: block;
border-radius: 10px;
box-shadow: 0 0 1px rgba(0, 0, 0, 0.3);
}
.image-container::before {
content: "";
position: absolute;
top: 0px;
left: 20px;
width: calc(100% - 40px);
height: calc(100%);
background-image: url("https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/8eG7GxTvcbxyVFQf5GF3C.png");
background-size: cover;
filter: blur(10px);
z-index: -1;
}
</style>
<br>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/8eG7GxTvcbxyVFQf5GF3C.png" style="width: 96%; margin: auto;" >
</div>
> [!NOTE]
> [GGUF](https://huggingface.co/collections/xxx777xxxASD/snowstorm-v115-4x8b-a-665587d3fda461267cfa9d69)
An experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
There are two variants:
- [v1.15A](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A) <- You're here
- [v1.15B](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B)
### Llama 3 SnowStorm v1.15A 4x8B
```
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: Nitral-AI_Poppy_Porpoise-1.0-L3-8B
- source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
- source_model: openlynn_Llama-3-Soliloquy-8B-v2
- source_model: Sao10K_L3-8B-Stheno-v3.1
```
## Models used
- [Nitral-AI/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B)
- [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
- [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
## Difference (from SnowStorm v1.0)
- Update from [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) to [Nitral-AI/Poppy_Porpoise-0.85-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.85-L3-8B)
## Vision
[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated)

## Prompt format: Llama 3
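A short sketch of building a Llama 3 formatted prompt with the tokenizer's chat template (assuming the repository ships the standard Llama 3 chat template):
```python
# Sketch: build a Llama 3 formatted prompt via the tokenizer's chat template.
# Assumes the repository ships the standard Llama 3 chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A")
messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```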
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_xxx777xxxASD__L3-SnowStorm-v1.15-4x8B-A)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.68|
|AI2 Reasoning Challenge (25-Shot)|62.20|
|HellaSwag (10-Shot) |81.09|
|MMLU (5-Shot) |67.89|
|TruthfulQA (0-shot) |52.11|
|Winogrande (5-shot) |76.32|
|GSM8k (5-shot) |66.49|
|
johnsutor/mixture-of-llamas-linear | johnsutor | 2024-05-30T16:36:28Z | 673 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-30T16:19:13Z | ---
base_model:
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- meta-llama/Meta-Llama-3-8B-Instruct
- DeepMount00/Llama-3-8b-Ita
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# linear
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
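Conceptually, the linear method is a parameter-wise weighted average of the listed models. The toy sketch below illustrates the idea only; it is not mergekit's implementation:
```python
# Toy sketch of a linear merge: parameter-wise weighted average of N state dicts.
# Illustration of the idea only, not mergekit's implementation.
import torch

def linear_merge(state_dicts, weights):
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts)) / total
    return merged
```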
### Models Merged
The following models were included in the merge:
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0](https://huggingface.co/jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0)
* [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita)
* [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.5
weight: 1.0
- model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
density: 0.5
weight: 1.0
- model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
parameters:
density: 0.5
weight: 1.0
- model: DeepMount00/Llama-3-8b-Ita
parameters:
density: 0.5
weight: 1.0
- model: nbeerbower/llama-3-gutenberg-8B
parameters:
density: 0.5
weight: 1.0
- model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
parameters:
density: 0.5
weight: 1.0
merge_method: linear
tokenizer_source: union
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
int8_mask: true
dtype: bfloat16
``` |
Kodajonathan458/phi-3-finetuned-gguf | Kodajonathan458 | 2024-06-22T20:48:23Z | 673 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-22T20:46:49Z | Entry not found |
timm/xcit_nano_12_p8_224.fb_in1k | timm | 2024-02-10T23:43:37Z | 672 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-13T02:22:31Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_nano_12_p8_224.fb_in1k
An XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.0
- GMACs: 2.2
- Activations (M): 15.7
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_nano_12_p8_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_nano_12_p8_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 128) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
dg845/univnet-dev | dg845 | 2023-10-24T09:04:11Z | 672 | 0 | transformers | [
"transformers",
"pytorch",
"univnet",
"feature-extraction",
"arxiv:2106.07889",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-13T03:51:38Z | ---
license: bsd-3-clause
---
The UnivNet model is a state-of-the-art neural vocoder which synthesizes audio waveforms from full-band MEL spectrograms, introduced in ["UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation"](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, Juntae Kim.
UnivNet is a generative adversarial network (GAN) in which the generator is trained to convert real (or fake, during training) log MEL spectrograms to waveforms, and the discriminator is trained to classify whether input waveforms are real or fake.
From the original paper abstract:
> Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.
Currently, only the generator/vocoder part of the model is implemented.
This checkpoint was released as part of an [unofficial implementation](https://github.com/maum-ai/univnet) by [maum-ai](https://huggingface.co/maum-ai) (on which the `transformers` implementation is also based).
As far as I know, there is no official model or code release by the original authors from [Kakao Enterprise](https://huggingface.co/kakao-enterprise).
## Download
The original PyTorch model checkpoints from the [maum-ai/univnet](https://github.com/maum-ai/univnet) implementation can be downloaded from their [Github repo](https://github.com/maum-ai/univnet#pre-trained-model).
Note that this checkpoint corresponds with their [c32](https://github.com/maum-ai/univnet/blob/master/config/default_c32.yaml) checkpoint.
The `transformers` model and feature extractor (to prepare inputs for the model) can be downloaded as follows:
```python
from transformers import UnivNetFeatureExtractor, UnivNetModel
model_id_or_path = "dg845/univnet-dev"
feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)
model = UnivNetModel.from_pretrained(model_id_or_path)
```
## Usage
The original model checkpoints can be used with the [maum-ai/univnet](https://github.com/maum-ai/univnet) codebase.
An example of using the UnivNet model with `transformers` is as follows:
```python
import torch
from scipy.io.wavfile import write
from datasets import Audio, load_dataset
from transformers import UnivNetFeatureExtractor, UnivNetModel
model_id_or_path = "dg845/univnet-dev"
model = UnivNetModel.from_pretrained(model_id_or_path)
feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# Resample the audio to the model and feature extractor's sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
# Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples.
inputs = feature_extractor(
ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt"
)
with torch.no_grad():
audio = model(**inputs)
# Remove the extra padding at the end of the output.
audio = feature_extractor.batch_decode(**audio)[0]
# Convert to wav file
write("sample_audio.wav", feature_extractor.sampling_rate, audio)
```
## Model Details
- **Model type:** Vocoder (spectrogram-to-waveform) model, trained as the generator of a GAN
- **Dataset:** LibriTTS
- **License:** BSD-3-Clause
- **Model Description:** This model maps log MEL spectrograms to audio waveforms (that is, a vocoder). Its main component is a [location-variable convolution](https://github.com/zceng/LVCNet) based ResNet, which parameterizes the vocoder. This model was trained as the generator of a generative adversarial network (GAN).
- **Resources for more information:** [Paper](https://arxiv.org/abs/2106.07889), [unofficial implementation](https://github.com/maum-ai/univnet) |
sequelbox/Llama2-13B-DaringFortitude | sequelbox | 2024-05-15T15:52:24Z | 672 | 13 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-31T12:30:37Z | ---
license: llama2
---
Daring Fortitude is a general capability upgrade to Llama 2 13b, using open source data to improve technical and overall knowledge.
This model is primarily recommended as a superior-to-Llama-2 baseline for additional finetuning, not for direct deployment to production as a chat model. The user accepts full responsibility for all outputs.
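For reference, a minimal loading sketch with `transformers` (the repository id is taken from this card; the dtype and device settings are assumptions, not a prescribed configuration):
```python
# Sketch only: load the model as a starting point for further fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sequelbox/Llama2-13B-DaringFortitude"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; use the dtype that fits your hardware
    device_map="auto",
)
```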
This is a 'legacy model' offered primarily for reference purposes. I recommend Llama 3 over this model for general use. |
ChrisWilson011016/5DUHinRYSY3NjVwv2WsKCaHDfCSxUM9z28GeBZyVJmN27XUo_vgg | ChrisWilson011016 | 2024-02-29T14:11:18Z | 672 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-24T15:08:54Z | Entry not found |
allknowingroger/limyClown-7B-slerp | allknowingroger | 2024-04-10T19:11:30Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"CorticalStack/shadow-clown-7B-slerp",
"base_model:liminerity/M7-7b",
"base_model:CorticalStack/shadow-clown-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T17:10:50Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- CorticalStack/shadow-clown-7B-slerp
base_model:
- liminerity/M7-7b
- CorticalStack/shadow-clown-7B-slerp
license: apache-2.0
---
# limyClown-7B-slerp
limyClown-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [CorticalStack/shadow-clown-7B-slerp](https://huggingface.co/CorticalStack/shadow-clown-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0, 32]
- model: CorticalStack/shadow-clown-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/limyClown-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
allknowingroger/FrankenRoger-10B-passthrough | allknowingroger | 2024-04-10T19:06:18Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/limyClown-7B-slerp",
"base_model:allknowingroger/limyClown-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T09:48:44Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
base_model:
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/limyClown-7B-slerp
license: apache-2.0
---
# FrankenRoger-10B-passthrough
FrankenRoger-10B-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: allknowingroger/limyClown-7B-slerp
layer_range: [0,9]
- sources:
- model: allknowingroger/limyClown-7B-slerp
layer_range: [5,14]
- sources:
- model: allknowingroger/limyClown-7B-slerp
layer_range: [10,19]
- sources:
- model: allknowingroger/limyClown-7B-slerp
layer_range: [15,24]
- sources:
- model: allknowingroger/limyClown-7B-slerp
layer_range: [20,32]
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/FrankenRoger-7B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
allknowingroger/ANIMA-biodesign-7B-slerp | allknowingroger | 2024-04-10T18:42:15Z | 672 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp",
"Severian/ANIMA-Neural-Hermes",
"conversational",
"base_model:MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp",
"base_model:Severian/ANIMA-Neural-Hermes",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-31T12:57:08Z | ---
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp
- Severian/ANIMA-Neural-Hermes
base_model:
- MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp
- Severian/ANIMA-Neural-Hermes
license: apache-2.0
---
# ANIMA-biodesign-7B-slerp
ANIMA-biodesign-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp)
* [Severian/ANIMA-Neural-Hermes](https://huggingface.co/Severian/ANIMA-Neural-Hermes)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp
layer_range: [0, 32]
- model: Severian/ANIMA-Neural-Hermes
layer_range: [0, 32]
merge_method: slerp
base_model: MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/ANIMA-biodesign-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
grimjim/kukulemon-spiked-9B | grimjim | 2024-04-27T03:47:20Z | 672 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:grimjim/kukulemon-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T19:28:33Z | ---
base_model:
- grimjim/kukulemon-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-spiked-9B
This is a frankenmerge of a pre-trained language model created using [mergekit](https://github.com/cg123/mergekit). As an experiment, this appears to be a partial success.
Lightly tested with temperature 1-1.2 and minP 0.01 with ChatML prompts; the model supports Alpaca prompts and has 8K context length, a result of its Mistral v0.1 provenance. The model's output has been coherent and stable during aforementioned testing.
The merge formula for this frankenmerge is below. It is conjectured that the shorter first section is not key to variation, the middle segment is key to balancing reasoning and variation, and that the lengthy final section is required for convergence and eventual stability. The internal instability is probably better suited for narrative involving unstable and/or unhinged characters and situations.
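As a concrete illustration of those settings, here is a minimal `transformers` sketch (the Alpaca prompt text and generation values are illustrative assumptions; `min_p` sampling requires a reasonably recent `transformers` release):
```python
# Sketch only: sample with roughly the settings noted above (temperature ~1.1, min_p 0.01).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/kukulemon-spiked-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene narrated by an unreliable character.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=1.1, min_p=0.01)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```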
Quants available:
- [GGUF](https://huggingface.co/grimjim/kukulemon-spiked-9B-GGUF)
- [8.0bpw h8 exl2](https://huggingface.co/grimjim/kukulemon-spiked-9B-8.0bpw_h8_exl2)
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/kukulemon-7B
layer_range: [0, 12]
- sources:
- model: grimjim/kukulemon-7B
layer_range: [8, 16]
- sources:
- model: grimjim/kukulemon-7B
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
|
MaziyarPanahi/Experiment26Yam_Ognoexperiment27Multi_verse_model | MaziyarPanahi | 2024-04-06T16:57:42Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/Experiment26Yam-7B",
"base_model:automerger/Ognoexperiment27Multi_verse_model-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-04-06T16:40:11Z | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: Experiment26Yam_Ognoexperiment27Multi_verse_model
base_model:
- automerger/Experiment26Yam-7B
- automerger/Ognoexperiment27Multi_verse_model-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Experiment26Yam_Ognoexperiment27Multi_verse_model
Experiment26Yam_Ognoexperiment27Multi_verse_model is a merge of the following models:
* [automerger/Experiment26Yam-7B](https://huggingface.co/automerger/Experiment26Yam-7B)
* [automerger/Ognoexperiment27Multi_verse_model-7B](https://huggingface.co/automerger/Ognoexperiment27Multi_verse_model-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Experiment26Yam_Ognoexperiment27Multi_verse_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
KKare/Misgit-7B-slerp | KKare | 2024-04-09T21:09:19Z | 672 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"mistralai/Mistral-7B-v0.1",
"base_model:fblgit/UNA-TheBeagle-7b-v1",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T21:10:39Z | ---
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- mistralai/Mistral-7B-v0.1
base_model:
- fblgit/UNA-TheBeagle-7b-v1
- mistralai/Mistral-7B-v0.1
license: apache-2.0
---
# Misgit-7B-slerp
Misgit-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: fblgit/UNA-TheBeagle-7b-v1
layer_range: [0, 32]
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "KKare/Misgit-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jpquiroga/Mistral_7B_dare_ties_merge_instruct_open_orca | jpquiroga | 2024-04-17T10:04:07Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T08:44:34Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
- Open-Orca/Mistral-7B-OpenOrca
- mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
density: 0.4
weight: 0.5
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.4
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
``` |
allknowingroger/Lamma3merge2-15B-MoE | allknowingroger | 2024-04-19T07:38:17Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep",
"birgermoell/Llama-3-dare_ties",
"conversational",
"base_model:mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep",
"base_model:birgermoell/Llama-3-dare_ties",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-19T07:30:47Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep
- birgermoell/Llama-3-dare_ties
base_model:
- mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep
- birgermoell/Llama-3-dare_ties
---
# Lamma3merge2-15B-MoE
Lamma3merge2-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep](https://huggingface.co/mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep)
* [birgermoell/Llama-3-dare_ties](https://huggingface.co/birgermoell/Llama-3-dare_ties)
## 🧩 Configuration
```yaml
base_model: mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep
experts:
- source_model: mohsenfayyaz/Meta-Llama-3-8B-Instruct_esnli_5000_1ep
positive_prompts: ["why"]
- source_model: birgermoell/Llama-3-dare_ties
positive_prompts: ["mhat"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Lamma3merge2-15B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Yuma42/KangalKhan-Alpha-Emerald-7B-Fixed | Yuma42 | 2024-04-25T17:45:45Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Yuma42/KangalKhan-Beta-Sapphire-7B",
"Yuma42/KangalKhan-Ruby-7B-Fixed",
"conversational",
"en",
"base_model:Yuma42/KangalKhan-Beta-Sapphire-7B",
"base_model:Yuma42/KangalKhan-Ruby-7B-Fixed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-25T17:32:10Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Yuma42/KangalKhan-Beta-Sapphire-7B
- Yuma42/KangalKhan-Ruby-7B-Fixed
base_model:
- Yuma42/KangalKhan-Beta-Sapphire-7B
- Yuma42/KangalKhan-Ruby-7B-Fixed
license: apache-2.0
language:
- en
---
# KangalKhan-Alpha-Emerald-7B-Fixed
KangalKhan-Alpha-Emerald-7B-Fixed is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Yuma42/KangalKhan-Beta-Sapphire-7B](https://huggingface.co/Yuma42/KangalKhan-Beta-Sapphire-7B)
* [Yuma42/KangalKhan-Ruby-7B-Fixed](https://huggingface.co/Yuma42/KangalKhan-Ruby-7B-Fixed)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Yuma42/KangalKhan-Beta-Sapphire-7B
layer_range: [0, 32]
- model: Yuma42/KangalKhan-Ruby-7B-Fixed
layer_range: [0, 32]
merge_method: slerp
base_model: Yuma42/KangalKhan-Beta-Sapphire-7B
parameters:
t:
- filter: self_attn
value: [0.9, 0.45, 0.65, 0.25, 0.03]
- filter: mlp
value: [0.1, 0.55, 0.35, 0.75, 0.97]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-Alpha-Emerald-7B-Fixed"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
airesearch/PolyLM-13b-WangchanX-sft-Demo | airesearch | 2024-05-03T06:18:30Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"th",
"en",
"dataset:airesearch/concat_six_dataset_th_en",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T05:33:14Z | ---
language:
- th
- en
license: cc-by-nc-3.0
datasets:
- airesearch/concat_six_dataset_th_en
---
# PolyLM-13b-WangchanX-sft-Demo
Built with PolyLM-13b (fine-tuned with QLoRA)
This model is based on [WangchanX Fine-tuning Pipeline](https://github.com/vistec-AI/WangchanX).
GitHub: [WangchanX Fine-tuning Pipeline](https://github.com/vistec-AI/WangchanX).
License: cc-by-nc-3.0
## Train Example
Train WangchanX pipeline: [Colab](https://colab.research.google.com/github/vistec-AI/WangchanX/blob/main/notebooks/Train_WangchanX_pipeline.ipynb)
## Inference Example
Run on [Colab](https://colab.research.google.com/drive/1PeUnv89Ao2uHRYYzZVOlUwoBUdYKFbLS?usp=sharing)
### Prepare your model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Model path
path = "airesearch/PolyLM-13b-WangchanX-sft-Demo"
# Device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto")
```
### Define chat messages:
```python
messages = [
{"role": "user", "content": "รู้จักประเทศไทยไหม"},
]
```
### Tokenize chat messages:
```python
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(device)
print(tokenizer.decode(tokenized_chat[0]))
```
<details close>
<summary>Output: </summary>
<br>
<pre lang="markdown">
<|user|>
รู้จักประเทศไทยไหม</s>
<|assistant|></pre>
</details>
### Generate responses:
```python
outputs = model.generate(tokenized_chat, max_length=2048)
print(tokenizer.decode(outputs[0]))
```
<details close>
<summary>Output: </summary>
<br>
<pre lang="markdown">
<|user|>
รู้จักประเทศไทยไหม</s>
<|assistant|>
ขอแนะนำให้ทราบว่า ประเทศไทยเป็นดินแดนที่มีความหลากหลายทางวัฒนธรรมและมีประเพณีที่งดงามมากมาย มีอารยะธรรมในอดีตที่มีมาตั้งแต่สมัยก่อนพุทธกาล เป็นศูนย์กลางของชาวเขาเผ่าต่างๆ ในเอเชีย และยังคงสภาพความรุ่งโรจน์ได้อย่างไม่น่าเชื่อด้วยการพัฒนาเศรษฐกิจที่เติบโตรวดเร็วที่สุดในภูมิภาคนี้ นอกจากนั้นแล้วก็ยังได้รับเลือกให้จัดการแข่งขันกีฬาซีเกมส์ครั้งล่าสุดเมื่อปี พ.ศ.2560 ซึ่งถือเป็นเหรียญเงินในการแข่งขันระดับอื่นอีกด้วย โดยปัจจุบันนี้มีการลงประชามติเพื่อรับรองการเข้าร่วมสหประชาชาติด้านมนุษยชน (ICCPR) เพื่อจะทำให้สามารถใช้สิทธิมนุษยชนตามอนุสัญญาสหประชาชาติเกี่ยวกับสิทธิมนุษยนชนหรือ ICCPR ได้เต็มอัตราส่วนเท่ากันกับนักศึกษาทั่วไป แต่เนื่องจากผู้คนจำนวนน้อยกว่า จึงไม่มีผลต่อการสมัครเรียน ครู หรือแม้กระทั้งคุณครูเองต้องยอมเหนื่อยกายใจทำงานเพิ่มขึ้นเพราะเด็กเล็กจำนวนหนึ่งเกิดใหม่ทุกวันโดยเฉพาะช่วงที่เปิดเทอมใหญ่ อยากฝึกงานทำวิทยาลัยที่สองเลยค่ะ ขอบพระคุณครับ</s></pre>
</details> |
andrijdavid/Meta-Llama-3-13B-Instruct | andrijdavid | 2024-05-07T21:40:29Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-07T21:27:10Z | ---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
# Meta-Llama-3-13B-Instruct
Meta-Llama-3-13B-Instruct is a [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
## Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 16]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [4, 24]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [8, 31]
model: meta-llama/Meta-Llama-3-8B-Instruct
merge_method: passthrough
dtype: float16
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "andrijdavid/Meta-Llama-3-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
``` |
netcat420/MFANN3bv0.7.10 | netcat420 | 2024-05-08T18:23:00Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:netcat420/MFANN3bv0.3",
"base_model:netcat420/MFANN3bv0.7",
"base_model:liminerity/Phigments12",
"base_model:netcat420/MFANN3bv0.6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-08T16:31:25Z | ---
base_model:
- netcat420/MFANN3bv0.3
- netcat420/MFANN3bv0.7
- liminerity/Phigments12
- netcat420/MFANN3bv0.6
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# MFANN3bv0.7.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN3bv0.3](https://huggingface.co/netcat420/MFANN3bv0.3)
* [netcat420/MFANN3bv0.7](https://huggingface.co/netcat420/MFANN3bv0.7)
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANN3bv0.6
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.3
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: netcat420/MFANN3bv0.7
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
int8_mask: true
dtype: float16
```
System prompt
```
### System:
### thought-process:
``` |
pszemraj/Mistral-7B-v0.3-prune6 | pszemraj | 2024-05-24T22:15:47Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-24T19:07:50Z | ---
base_model:
- mistralai/Mistral-7B-v0.3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Mistral-7B-v0.3-prune6
This is a layer-pruned pre-trained language model sliced with [mergekit](https://github.com/cg123/mergekit). No additional training was performed.
## layer selection

## Quick eval
Quick eval for: pszemraj/Mistral-7B-v0.3-prune6
hf (pretrained=pszemraj/Mistral-7B-v0.3-prune6,trust_remote_code=True,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|----------|------:|---|-----:|
|arc_easy | 1|none | 0|acc | 0.6393|± |0.0099|
| | |none | 0|acc_norm | 0.6309|± |0.0099|
|boolq | 2|none | 0|acc | 0.7599|± |0.0075|
|lambada_openai| 1|none | 0|perplexity|10.1184|± |0.2771|
| | |none | 0|acc | 0.5507|± |0.0069|
|openbookqa | 1|none | 0|acc | 0.2200|± |0.0185|
| | |none | 0|acc_norm | 0.3580|± |0.0215|
|piqa | 1|none | 0|acc | 0.7203|± |0.0105|
| | |none | 0|acc_norm | 0.7350|± |0.0103|
|winogrande | 1|none | 0|acc | 0.6906|± |0.0130|
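The run above can be reproduced programmatically; a sketch assuming lm-evaluation-harness ≥ 0.4 and its `simple_evaluate` API (tasks and settings copied from the header line above):
```python
# Sketch only: re-run the quick eval above with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=pszemraj/Mistral-7B-v0.3-prune6,trust_remote_code=True,dtype=bfloat16",
    tasks=["arc_easy", "boolq", "lambada_openai", "openbookqa", "piqa", "winogrande"],
    batch_size=2,
)
print(results["results"])
```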
### original
bootstrapping for stddev: perplexity
hf (pretrained=mistralai/Mistral-7B-v0.3,trust_remote_code=True,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|------:|------|-----:|----------|-----:|---|-----:|
|arc_easy | 1|none | 0|acc |0.7959|± |0.0083|
| | |none | 0|acc_norm |0.7832|± |0.0085|
|boolq | 2|none | 0|acc |0.8202|± |0.0067|
|lambada_openai| 1|none | 0|perplexity|3.2578|± |0.0601|
| | |none | 0|acc |0.7518|± |0.0060|
|openbookqa | 1|none | 0|acc |0.3340|± |0.0211|
| | |none | 0|acc_norm |0.4420|± |0.0222|
|piqa | 1|none | 0|acc |0.8009|± |0.0093|
| | |none | 0|acc_norm |0.8215|± |0.0089|
|winogrande | 1|none | 0|acc |0.7380|± |0.0124|
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 22]
model: mistralai/Mistral-7B-v0.3
- sources:
- layer_range: [28, 32]
model: mistralai/Mistral-7B-v0.3
``` |
uukuguy/speechless-mistral-7B-v0.2-mixed-1 | uukuguy | 2024-05-27T08:54:29Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:TokenBender/python_eval_instruct_51k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-26T21:09:05Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
- TokenBender/python_eval_instruct_51k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value:
verified: false
---
<p><h1> speechless-mistral-7b-v0.2-mixed-1 </h1></p>
Code: https://github.com/uukuguy/speechless
The base model is mistralai/Mistral-7B-v0.2.
Continued pre-training was performed on top of mistralai/Mistral-7B-v0.2 using the following datasets.
Total 201,981 samples.
- jondurbin/airoboros-2.2: Filtered to categories related to coding, reasoning and planning. 23,462 samples.
- Open-Orca/OpenOrca: Filtered to the 'cot' category of the 1M GPT-4 dataset. 74,440 samples.
- garage-bAInd/Open-Platypus: 100%, 24,926 samples.
- WizardLM/WizardLM_evol_instruct_V2_196k: Coding conversation part. 30,185 samples.
- TokenBender/python_eval_instruct_51k: Samples with "python" in the output. 40,309 samples.
- Spider: 8,659 samples
## How to Prompt the Model
This model accepts the Alpaca instruction format.
For example:
```
You are an intelligent programming assistant.
### Instruction:
Implement a linked list in C++
### Response:
```
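Put together with `transformers`, prompting might look like the sketch below (the generation settings are illustrative assumptions):
```python
# Sketch only: build the Alpaca-style prompt shown above and generate a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-mistral-7B-v0.2-mixed-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```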
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 58.53 |
| HellaSwag | 80.97 |
| MMLU | 59.99 |
| TruthfulQA | 48.22 |
| Winogrande | 78.77 |
| GSM8K | 59.06 |
| Average | 64.26 |
|
netcat420/MFANN3bv0.10.10 | netcat420 | 2024-05-29T00:35:15Z | 672 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"mergekit",
"merge",
"en",
"arxiv:2306.01708",
"base_model:netcat420/MFANN3bv0.6",
"base_model:liminerity/Phigments12",
"base_model:netcat420/MFANN3bv0.10",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-28T22:59:34Z | ---
base_model:
- netcat420/MFANN3bv0.6
- liminerity/Phigments12
- netcat420/MFANN3bv0.10
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
pipeline_tag: text-generation
---
# MFANN3bv0.10.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
* [netcat420/MFANN3bv0.10](https://huggingface.co/netcat420/MFANN3bv0.10)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANN3bv0.6
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANN3bv0.10
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf | RichardErkhov | 2024-06-25T21:38:49Z | 672 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-25T21:33:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephyr-220m-sft-full - GGUF
- Model creator: https://huggingface.co/BEE-spoke-data/
- Original model: https://huggingface.co/BEE-spoke-data/zephyr-220m-sft-full/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-220m-sft-full.Q2_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q2_K.gguf) | Q2_K | 0.09GB |
| [zephyr-220m-sft-full.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.IQ3_XS.gguf) | IQ3_XS | 0.1GB |
| [zephyr-220m-sft-full.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.IQ3_S.gguf) | IQ3_S | 0.1GB |
| [zephyr-220m-sft-full.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q3_K_S.gguf) | Q3_K_S | 0.1GB |
| [zephyr-220m-sft-full.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.IQ3_M.gguf) | IQ3_M | 0.1GB |
| [zephyr-220m-sft-full.Q3_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q3_K.gguf) | Q3_K | 0.11GB |
| [zephyr-220m-sft-full.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q3_K_M.gguf) | Q3_K_M | 0.11GB |
| [zephyr-220m-sft-full.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q3_K_L.gguf) | Q3_K_L | 0.11GB |
| [zephyr-220m-sft-full.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.IQ4_XS.gguf) | IQ4_XS | 0.12GB |
| [zephyr-220m-sft-full.Q4_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q4_0.gguf) | Q4_0 | 0.12GB |
| [zephyr-220m-sft-full.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.IQ4_NL.gguf) | IQ4_NL | 0.12GB |
| [zephyr-220m-sft-full.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q4_K_S.gguf) | Q4_K_S | 0.12GB |
| [zephyr-220m-sft-full.Q4_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q4_K.gguf) | Q4_K | 0.13GB |
| [zephyr-220m-sft-full.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q4_K_M.gguf) | Q4_K_M | 0.13GB |
| [zephyr-220m-sft-full.Q4_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q4_1.gguf) | Q4_1 | 0.13GB |
| [zephyr-220m-sft-full.Q5_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q5_0.gguf) | Q5_0 | 0.14GB |
| [zephyr-220m-sft-full.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q5_K_S.gguf) | Q5_K_S | 0.14GB |
| [zephyr-220m-sft-full.Q5_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q5_K.gguf) | Q5_K | 0.15GB |
| [zephyr-220m-sft-full.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q5_K_M.gguf) | Q5_K_M | 0.15GB |
| [zephyr-220m-sft-full.Q5_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q5_1.gguf) | Q5_1 | 0.16GB |
| [zephyr-220m-sft-full.Q6_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q6_K.gguf) | Q6_K | 0.17GB |
| [zephyr-220m-sft-full.Q8_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf/blob/main/zephyr-220m-sft-full.Q8_0.gguf) | Q8_0 | 0.22GB |
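Any of the files above can also be fetched programmatically; a sketch using `huggingface_hub` (the filename is just one of the quants listed in the table):
```python
# Sketch only: download a single quant from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/BEE-spoke-data_-_zephyr-220m-sft-full-gguf",
    filename="zephyr-220m-sft-full.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file, usable with llama.cpp-compatible runtimes
```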
Original model description:
---
license: apache-2.0
base_model: BEE-spoke-data/smol_llama-220M-openhermes
tags:
- generated_from_trainer
model-index:
- name: zephyr-220m-sft-full
results: []
datasets:
- HuggingFaceH4/ultrachat_200k
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-220m-sft-full
This model is a fine-tuned version of [BEE-spoke-data/smol_llama-220M-openhermes](https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes) on the Ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to `transformers.TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
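A rough sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the output directory is a placeholder; distributed settings such as the two GPUs are handled by the launcher and not shown):
```python
# Sketch only: the training hyperparameters listed above as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-220m-sft-full",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,      # 16 per device x 2 GPUs x 4 = 128 effective train batch
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    seed=42,                            # Adam betas/epsilon left at defaults (0.9, 0.999, 1e-8)
)
```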
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6447 | 1.0 | 1624 | 1.6579 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
https://wandb.ai/amazingvince/huggingface/runs/5rffzk3x/workspace?workspace=user-amazingvince
|
Fizzarolli/L3-Aethora-15B-V2-Q4_K_S-GGUF | Fizzarolli | 2024-06-27T02:13:59Z | 672 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:TheSkullery/Aether-Lite-v1.8.1",
"base_model:ZeusLabs/L3-Aethora-15B-V2",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T02:13:12Z | ---
base_model: ZeusLabs/L3-Aethora-15B-V2
datasets:
- TheSkullery/Aether-Lite-v1.8.1
language:
- en
library_name: transformers
license: cc-by-sa-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# Fizzarolli/L3-Aethora-15B-V2-Q4_K_S-GGUF
This model was converted to GGUF format from [`ZeusLabs/L3-Aethora-15B-V2`](https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Fizzarolli/L3-Aethora-15B-V2-Q4_K_S-GGUF --hf-file l3-aethora-15b-v2-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Fizzarolli/L3-Aethora-15B-V2-Q4_K_S-GGUF --hf-file l3-aethora-15b-v2-q4_k_s-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Fizzarolli/L3-Aethora-15B-V2-Q4_K_S-GGUF --hf-file l3-aethora-15b-v2-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Fizzarolli/L3-Aethora-15B-V2-Q4_K_S-GGUF --hf-file l3-aethora-15b-v2-q4_k_s-imat.gguf -c 2048
```
|
yshen99/ZhiGuoLiZheng-GPT2 | yshen99 | 2023-04-02T21:43:03Z | 671 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-12-14T16:54:36Z | ---
license: mit
widget:
- text: "要进一步加强党风廉政建设"
example_title: "example 1"
- text: "要落实全面建成"
example_title: "example 2"
---
A GPT-2 model fine-tuned on Chinese political text.
|
timm/coat_lite_small.in1k | timm | 2023-04-24T03:43:29Z | 671 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.06399",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-24T03:43:17Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coat_lite_small.in1k
A CoaT (Co-Scale Conv-Attentional Transformer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 19.8
- GMACs: 4.0
- Activations (M): 22.1
- Image size: 224 x 224
- **Papers:**
- Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coat_lite_small.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coat_lite_small.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{Xu_2021_ICCV,
author = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
title = {Co-Scale Conv-Attentional Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {9981-9990}
}
```
|
TheBloke/Stheno-Inverted-L2-13B-GGUF | TheBloke | 2023-09-27T12:47:02Z | 671 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Sao10K/Stheno-Inverted-L2-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-09-01T22:02:25Z | ---
language:
- en
license: llama2
model_name: Stheno Inverted L2 13B
base_model: Sao10K/Stheno-Inverted-L2-13B
inference: false
model_creator: Sao10K
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Stheno Inverted L2 13B - GGUF
- Model creator: [Sao10K](https://huggingface.co/Sao10K)
- Original model: [Stheno Inverted L2 13B](https://huggingface.co/Sao10K/Stheno-Inverted-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sao10K's Stheno Inverted L2 13B](https://huggingface.co/Sao10K/Stheno-Inverted-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF)
* [Sao10K's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Stheno-Inverted-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stheno-inverted-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [stheno-inverted-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [stheno-inverted-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [stheno-inverted-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [stheno-inverted-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stheno-inverted-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [stheno-inverted-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [stheno-inverted-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stheno-inverted-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [stheno-inverted-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [stheno-inverted-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [stheno-inverted-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF/blob/main/stheno-inverted-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Stheno-Inverted-L2-13B-GGUF and below it, a specific filename to download, such as: stheno-inverted-l2-13b.Q4_K_M.gguf (file names are case-sensitive and match the Provided files table above).
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Stheno-Inverted-L2-13B-GGUF stheno-inverted-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Stheno-Inverted-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Stheno-Inverted-L2-13B-GGUF stheno-inverted-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m stheno-inverted-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Stheno-Inverted-L2-13B-GGUF", model_file="stheno-inverted-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
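### How to load this model from Python using llama-cpp-python
As mentioned above, llama-cpp-python is the other option. A minimal, hedged sketch — the parameter values are illustrative, and `model_path` should point at whichever GGUF file you downloaded:
```python
from llama_cpp import Llama

# Load the quantised model; set n_gpu_layers=0 for CPU-only inference
llm = Llama(
    model_path="./stheno-inverted-l2-13b.Q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=35,   # number of layers to offload to GPU
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short story about llamas.\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```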
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sao10K's Stheno Inverted L2 13B
<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 70%; min-width: 300px; display: block; margin: auto;">
The sister Model of [Stheno-L2-13B](https://huggingface.co/Sao10K/Stheno-L2-13B)
Stheno Inverted:
<br>Gradient Merge of Stheno-P2 & Stheno-P1, Models are in Inverted Positions
Quants courtesy of TheBloke!
<br>[GPTQ](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GPTQ)
<br>[GGUF](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF)
<br>[GGML](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGML)
Test Checklist:
<br>Censorship - Fairly Uncensored
<br>Writing - Good Prose, Fairly Descriptive
<br>NSFW - Yes
<br>IQ Level - Pretty Smart
<br>Formatting - Proper Formatting with Examples
*Noticeable difference with Stheno-L2. From personal tests: A bit more verbose, a little less smart, and a little more forward with NSFW compared to regular Stheno.*
Stheno-P1 [Ties-Merge]
<br>-----[elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)+[nRuaif/Kimiko-v2 **LORA**](https://huggingface.co/nRuaif/Kimiko-v2-13B)
Stheno-P2 [Ties-Merge]
<br>-----[CalderaAI/13B-Legerdemain-L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)+[lemonilia/limarp-llama2-v2 **LORA**](https://huggingface.co/lemonilia/limarp-llama2-v2)
<br>-----[ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
<br>-----[Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon)
Most formats could work, but my tests have all been done in Alpaca format and it works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
Below is the Illustration for the Final Merge:

Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [ties-merge](https://github.com/cg123/ties-merge) script, and [Gryphe](https://huggingface.co/Gryphe) for their great [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script.
Thank you to the original model creators too!
```
Art by wada_kazu / わだかず (pixiv page private?)
```
<!-- original-model-card end -->
|
facebook/dpt-dinov2-base-nyu | facebook | 2023-11-13T20:41:44Z | 671 | 0 | transformers | [
"transformers",
"pytorch",
"dpt",
"depth-estimation",
"vision",
"dinov2",
"arxiv:2304.07193",
"arxiv:2103.13413",
"license:apache-2.0",
"region:us"
] | depth-estimation | 2023-10-31T17:58:57Z | ---
license: apache-2.0
tags:
- vision
- depth-estimation
- dinov2
inference: false
---
# Model Card: DPT model with DINOv2 backbone
## Model Details
DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg"
alt="drawing" width="600"/>
<small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small>
### Resources
- [DINOv2 Paper](https://arxiv.org/abs/2304.07193)
- [DPT Paper](https://arxiv.org/abs/2103.13413)
### Use with Transformers
```python
from transformers import AutoImageProcessor, DPTForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-base-nyu")
model = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-base-nyu")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
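For quick experiments, the model can also be driven through the high-level `depth-estimation` pipeline, which wraps the same pre- and post-processing — a brief sketch, assuming a recent transformers version where this checkpoint is supported by the pipeline:
```python
from transformers import pipeline
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

depth_estimator = pipeline("depth-estimation", model="facebook/dpt-dinov2-base-nyu")
result = depth_estimator(image)
result["depth"].save("depth.png")  # "depth" is a PIL image of the predicted depth map
```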
## Model Use
### Intended Use
The model is intended to showcase that using the DPT framework with DINOv2 as backbone yields a powerful depth estimator.
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
allknowingroger/JaskierMistral-7B-slerp | allknowingroger | 2024-04-10T19:08:42Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"chihoonlee10/T3Q-DPO-Mistral-7B",
"bardsai/jaskier-7b-dpo-v6.1",
"base_model:chihoonlee10/T3Q-DPO-Mistral-7B",
"base_model:bardsai/jaskier-7b-dpo-v6.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T20:40:26Z | ---
tags:
- merge
- mergekit
- lazymergekit
- chihoonlee10/T3Q-DPO-Mistral-7B
- bardsai/jaskier-7b-dpo-v6.1
base_model:
- chihoonlee10/T3Q-DPO-Mistral-7B
- bardsai/jaskier-7b-dpo-v6.1
license: apache-2.0
---
# JaskierMistral-7B-slerp
JaskierMistral-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [chihoonlee10/T3Q-DPO-Mistral-7B](https://huggingface.co/chihoonlee10/T3Q-DPO-Mistral-7B)
* [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: chihoonlee10/T3Q-DPO-Mistral-7B
layer_range: [0, 32]
- model: bardsai/jaskier-7b-dpo-v6.1
layer_range: [0, 32]
merge_method: slerp
base_model: bardsai/jaskier-7b-dpo-v6.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/JaskierMistral-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ABX-AI/Infinite-Laymons-9B | ABX-AI | 2024-04-11T12:57:08Z | 671 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"base_model:Nitral-AI/Infinitely-Laydiculous-7B",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-27T23:42:26Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
base_model:
- Nitral-AI/Infinitely-Laydiculous-7B
model-index:
- name: Infinite-Laymons-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Infinite-Laymons-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Infinite-Laymons-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Infinite-Laymons-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.87
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Infinite-Laymons-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Infinite-Laymons-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Infinite-Laymons-9B
name: Open LLM Leaderboard
---

# Infinite-Laymons-9B
Infinite-Laymons-9B is intended for fictional role-play and storytelling.
The focus is on original responses and the elimination, or at least reduction, of refusals.
[GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Infinite-Laymons-9B-GGUF-IQ-Imatrix)
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
* [ABX-AI/Infinite-Laymons-7B](https://huggingface.co/ABX-AI/Infinite-Laymons-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Infinitely-Laydiculous-7B
layer_range: [0, 20]
- sources:
- model: ABX-AI/Infinite-Laymons-7B
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
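To reproduce a merge from a configuration like this, the mergekit CLI can be used — a minimal sketch, assuming mergekit is installed and the config above is saved as `config.yaml` (the output path and flags are illustrative):
```shell
pip install mergekit

# Run the passthrough merge described by config.yaml into ./Infinite-Laymons-9B
mergekit-yaml config.yaml ./Infinite-Laymons-9B --copy-tokenizer --cuda
```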
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Infinite-Laymons-9B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.29|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot) |84.14|
|MMLU (5-Shot) |64.53|
|TruthfulQA (0-shot) |54.87|
|Winogrande (5-shot) |80.82|
|GSM8k (5-shot) |53.75|
|
ShenaoZ/0.001_ablation_5iters_bs256_iter_4 | ShenaoZ | 2024-04-22T16:35:11Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_5iters_bs256_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-22T05:16:34Z | ---
license: mit
base_model: ShenaoZ/0.001_ablation_5iters_bs256_iter_3
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
datasets:
- updated
- original
model-index:
- name: 0.001_ablation_5iters_bs256_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_5iters_bs256_iter_4
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_5iters_bs256_iter_3](https://huggingface.co/ShenaoZ/0.001_ablation_5iters_bs256_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative mapping onto `TrainingArguments` is sketched after the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
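A hedged sketch of how these values map onto standard 🤗 `TrainingArguments` fields — the actual run used the alignment-handbook DPO recipe, which wraps them in its own config, so this is only an illustration and the output directory is an assumption:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="0.001_ablation_5iters_bs256_iter_4",  # assumed output path
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 GPUs x 8 per-device x 4 steps = 256 effective batch
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```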
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
DUAL-GPO/zephyr-7b-lgpo-final-i0 | DUAL-GPO | 2024-05-07T10:13:55Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-06T14:21:31Z | ---
license: mit
--- |
FelixChao/ShadowNeural-7B-ORPO | FelixChao | 2024-05-10T15:14:38Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-10T15:07:29Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mr-Bhaskar/fbt-gemma-7b | Mr-Bhaskar | 2024-05-12T15:03:30Z | 671 | 0 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-10T19:12:42Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
theBodhiTree/theBodhiTree-Zephyr-Gamma-7b | theBodhiTree | 2024-05-12T07:14:47Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | 2024-05-12T03:42:14Z | ---
license: apache-2.0
library_name: transformers
tags:
- merge
base_model:
- HuggingFaceH4/zephyr-7b-beta
---
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
# Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) as a base.
### Models Merged
The following models were included in the merge:
* [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: task_arithmetic
base_model:
model: HuggingFaceH4/zephyr-7b-beta
slices:
- sources:
- layer_range: [0, 32]
model: HuggingFaceH4/zephyr-7b-beta
parameters:
weight: 0.5
- layer_range: [0, 32]
model: mistral-community/Mistral-7B-v0.2
parameters:
weight: 0.5
``` |
GeorgiaTech/0.0_llama_nodpo_3iters_bs128_531lr_iter_2 | GeorgiaTech | 2024-05-12T06:01:36Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ZhangShenao/0.0_llama_nodpo_3iters_bs128_531lr_iter_1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T04:46:12Z | ---
license: other
base_model: ZhangShenao/0.0_llama_nodpo_3iters_bs128_531lr_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0_llama_nodpo_3iters_bs128_531lr_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_llama_nodpo_3iters_bs128_531lr_iter_2
This model is a fine-tuned version of [ZhangShenao/0.0_llama_nodpo_3iters_bs128_531lr_iter_1](https://huggingface.co/ZhangShenao/0.0_llama_nodpo_3iters_bs128_531lr_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
allknowingroger/Neuralmultiverse-7B-slerp | allknowingroger | 2024-05-17T12:47:18Z | 671 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"allknowingroger/NeuralCeptrix-7B-slerp",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:allknowingroger/NeuralCeptrix-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-17T12:42:07Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/NeuralCeptrix-7B-slerp
base_model:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/NeuralCeptrix-7B-slerp
license: apache-2.0
---
# Neuralmultiverse-7B-slerp
Neuralmultiverse-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [allknowingroger/NeuralCeptrix-7B-slerp](https://huggingface.co/allknowingroger/NeuralCeptrix-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/MultiverseEx26-7B-slerp
layer_range: [0, 32]
- model: allknowingroger/NeuralCeptrix-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/MultiverseEx26-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Neuralmultiverse-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
fblgit/UNA-ThePitbull-21.4-v1 | fblgit | 2024-05-28T14:56:31Z | 671 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"UNA",
"juanako",
"conversational",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-24T15:41:19Z | ---
license: afl-3.0
library_name: transformers
tags:
- UNA
- juanako
---
# For a better performance check out our v2 at [fblgit/UNA-ThePitbull-21.4B-v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2)
# UNA-ThePitbull 21.4B v1
Introducing the best LLM in the industry. Nearly as good as a 70B, just a 21.4B based on saltlux/luxia-21.4b-alignment-v1.0

This model has not been poisoned to score high on benchmarks while being useless. We release it because it is the real deal of EQ & IQ all together in a powerful, smart and conversational model. So far it is the #1 of its kind as of 25/5/2024.
Quant version available at [bartowski/UNA-ThePitbull-21.4-v1-GGUF](https://huggingface.co/bartowski/UNA-ThePitbull-21.4-v1-GGUF)
# For a better performance check out our v2 at [fblgit/UNA-ThePitbull-21.4B-v2](https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2)
# Evaluations
It can only be compared with its non-UNA base model, the original luxia-21.4b.
## UNA (VLLM) Evaluations
```
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|--------------|------:|----------------|-----:|-----------|-----:|---|-----:|
|gsm8k | 3|strict-match | 5|exact_match|0.7566|± |0.0118|
| | |flexible-extract| 5|exact_match|0.7582|± |0.0118|
|hellaswag | 1|none | 10|acc |0.8168|± |0.0039|
| | |none | 10|acc_norm |0.9188|± |0.0027|
|winogrande | 1|none | 5|acc |0.8635|± |0.0097|
|mmlu | N/A|none | 0|acc |0.6444|± |0.0038|
|arc_challenge | 1|none | 25|acc |0.7747|± |0.0122|
| | |none | 25|acc_norm |0.7850|± |0.0120|
|truthfulqa_mc2| 2|none | 0|acc |0.7902|± |0.0134|
|mathqa | 1|none | 0|acc |0.4030|± | 0.009|
| | |none | 0|acc_norm |0.4034|± | 0.009|
|pubmedqa | 1|none | 0|acc |0.6860|± |0.0208|
|boolq | 2|none | 0|acc |0.8401|± |0.0064|
```
## Original (VLLM) Evaluations
```
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|--------------|------:|----------------|-----:|-----------|-----:|---|-----:|
|gsm8k | 3|strict-match | 5|exact_match|0.7528|± |0.0119|
| | |flexible-extract| 5|exact_match|0.7521|± |0.0119|
|hellaswag | 1|none | 10|acc |0.8117|± |0.0039|
| | |none | 10|acc_norm |0.9167|± |0.0028|
|winogrande | 1|none | 5|acc |0.8682|± |0.0095|
|mmlu | N/A|none | 0|acc |0.6448|± |0.0038|
|arc_challenge | 1|none | 25|acc |0.7688|± |0.0123|
| | |none | 25|acc_norm |0.7730|± |0.0122|
|truthfulqa_mc2| 2|none | 0|acc |0.7895|± |0.0133|
|mathqa | 1|none | 0|acc |0.4000|± | 0.009|
| | |none | 0|acc_norm |0.4003|± | 0.009|
|pubmedqa | 1|none | 0|acc |0.6680|± |0.0211|
|boolq | 2|none | 0|acc |0.8346|± |0.0065|
```
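The tables above follow lm-evaluation-harness output; a hedged sketch of how a comparable run could be launched with the vLLM backend (task selection and flags are assumptions — the few-shot counts per task would follow the tables above):
```shell
pip install "lm-eval[vllm]"

lm_eval --model vllm \
  --model_args pretrained=fblgit/UNA-ThePitbull-21.4-v1,dtype=bfloat16 \
  --tasks gsm8k,hellaswag,winogrande,mmlu,arc_challenge,truthfulqa_mc2,mathqa,pubmedqa,boolq \
  --batch_size auto
```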
## UNA Details
Only the MLP layers were uniformed (UNA), leaving room for further optimisations.
You should be able to perform SFT+DPO again on this model at moderate learning rates (1e-4, 2e-5, etc.). |
Dongwookss/zephyr_tuning-gguf | Dongwookss | 2024-06-19T02:34:46Z | 671 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"trl",
"unsloth",
"ko",
"dataset:Dongwookss/q_a_korean_futsal",
"dataset:Dongwookss/prac_fut",
"dataset:mintaeng/llm_futsaldata_yo",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-06-17T02:05:48Z | ---
license: apache-2.0
datasets:
- Dongwookss/q_a_korean_futsal
- Dongwookss/prac_fut
- mintaeng/llm_futsaldata_yo
language:
- ko
tags:
- trl
- unsloth
---

- **Base Model:** 'HuggingFaceH4/zephyr-7b-beta'
- **Purpose:** To build a chatbot that ends its replies with "얼마든지 물어보세요~! 풋풋!" ("Ask me anything~! Putput!") and speaks in the polite Korean "해요" style. The project's concept is "풋풋이" (Putputi), a bot that serves futsal-domain information via RAG, which is why the speaking style is set this way.
- **Method:** Trained with the Unsloth package to keep VRAM usage within limits, using TRL's SFTTrainer for supervised fine-tuning (a minimal sketch follows).
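A minimal sketch of the Unsloth + TRL setup described above — the dataset split, sequence length, LoRA settings and training arguments here are assumptions for illustration, not the exact training recipe:
```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit so VRAM stays within limits
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/targets are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Dongwookss/q_a_korean_futsal", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: a pre-formatted text column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```
|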
clips/mfaq | clips | 2021-10-15T06:21:13Z | 670 | 35 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"cs",
"da",
"de",
"en",
"es",
"fi",
"fr",
"he",
"hr",
"hu",
"id",
"it",
"nl",
"no",
"pl",
"pt",
"ro",
"ru",
"sv",
"tr",
"vi",
"dataset:clips/mfaq",
"arxiv:2109.12870",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
language:
- cs
- da
- de
- en
- es
- fi
- fr
- he
- hr
- hu
- id
- it
- nl
- 'no'
- pl
- pt
- ro
- ru
- sv
- tr
- vi
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- clips/mfaq
widget:
source_sentence: "<Q>How many models can I host on HuggingFace?"
sentences:
- "<A>All plans come with unlimited private models and datasets."
- "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem."
- "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job."
---
# MFAQ
We present a multilingual FAQ retrieval model trained on the [MFAQ dataset](https://huggingface.co/datasets/clips/mfaq); it ranks candidate answers according to a given question.
## Installation
```
pip install sentence-transformers transformers
```
## Usage
You can use MFAQ with sentence-transformers or directly with a HuggingFace model.
In both cases, questions need to be prepended with `<Q>`, and answers with `<A>`.
#### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
question = "<Q>How many models can I host on HuggingFace?"
answer_1 = "<A>All plans come with unlimited private models and datasets."
answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem."
answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job."
model = SentenceTransformer('clips/mfaq')
embeddings = model.encode([question, answer_1, answer_2, answer_3])
print(embeddings)
```
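Since the model is meant to rank candidate answers, the embeddings can be scored against the question with cosine similarity — a minimal sketch using the `util` helper that ships with sentence-transformers:
```python
from sentence_transformers import util

# Cosine similarity between the question and each candidate answer
scores = util.cos_sim(embeddings[0], embeddings[1:])[0]
ranking = scores.argsort(descending=True)
print("Best answer index:", int(ranking[0]), "score:", float(scores[ranking[0]]))
```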
#### HuggingFace Transformers
```python
from transformers import AutoTokenizer, AutoModel
import torch
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
question = "<Q>How many models can I host on HuggingFace?"
answer_1 = "<A>All plans come with unlimited private models and datasets."
answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem."
answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job."
tokenizer = AutoTokenizer.from_pretrained('clips/mfaq')
model = AutoModel.from_pretrained('clips/mfaq')
# Tokenize sentences
encoded_input = tokenizer([question, answer_1, answer_2, answer_3], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```
## Training
You can find the training script for the model [here](https://github.com/clips/mfaq).
## People
This model was developed by [Maxime De Bruyn](https://www.linkedin.com/in/maximedebruyn/), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
## Citation information
```
@misc{debruyn2021mfaq,
title={MFAQ: a Multilingual FAQ Dataset},
author={Maxime De Bruyn and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans},
year={2021},
eprint={2109.12870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ml6team/bert-base-uncased-city-country-ner | ml6team | 2022-07-01T07:27:25Z | 670 | 13 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"address-NER",
"NER",
"bert-base-uncased",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- token-classification
- address-NER
- NER
- bert-base-uncased
datasets:
- Ultra Fine Entity Typing
metrics:
- Precision
- Recall
- F1 Score
widget:
- text: "Hi, I am Kermit and I live in Berlin"
- text: "It is very difficult to find a house in Berlin, Germany."
- text: "ML6 is a very cool company from Belgium"
- text: "Samuel ppops in a happy plce called Berlin which happens to be Kazakhstan"
- text: "My family and I visited Montreal, Canada last week and the flight from Amsterdam took 9 hours"
---
## City-Country-NER
A `bert-base-uncased` model finetuned on a custom dataset to detect `Country` and `City` names from a given sentence.
### Custom Dataset
We weakly supervised the [Ultra-Fine Entity Typing](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) dataset to include the `City` and `Country` information. We also did some extra preprocessing to remove false labels.
The model predicts 3 different tags: `OTHER`, `CITY` and `COUNTRY`
### How to use the finetuned model?
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("ml6team/bert-base-uncased-city-country-ner")
model = AutoModelForTokenClassification.from_pretrained("ml6team/bert-base-uncased-city-country-ner")
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("My name is Kermit and I live in London.")
``` |
voidful/mhubert-base | voidful | 2023-03-25T08:04:39Z | 670 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"hubert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-06-20T09:10:44Z | # mhubert-base
* the checkpoint converted from [textless s2st real data](https://github.com/facebookresearch/fairseq/blob/b5a039c292facba9c73f59ff34621ec131d82341/examples/speech_to_speech/docs/textless_s2st_real_data.md)
## usage:
```
asrp==0.0.35 # extracted from fairseq repo
```
```python
# https://huggingface.co/voidful/mhubert-base/resolve/main/mhubert_base_vp_en_es_fr_it3_L11_km1000.bin
# https://keithito.com/LJ-Speech-Dataset/LJ037-0171.wav
import asrp
hc = asrp.HubertCode("voidful/mhubert-base", './mhubert_base_vp_en_es_fr_it3_L11_km1000.bin', 11)
code = hc('./LJ037-0171.wav')['code']
```
result:
```
array([991, 393, 946, 215, 215, 327, 487, 487, 219, 219, 522, 522, 975,
975, 975, 975, 668, 576, 576, 384, 761, 907, 430, 748, 12, 12,
977, 877, 179, 961, 428, 428, 822, 89, 194, 194, 664, 817, 817,
146, 146, 146, 283, 283, 352, 352, 428, 428, 812, 523, 143, 105,
105, 244, 244, 583, 583, 576, 384, 879, 32, 170, 683, 731, 600,
600, 702, 15, 59, 754, 872, 324, 789, 789, 402, 908, 380, 211,
179, 961, 207, 950, 321, 113, 327, 327, 932, 148, 148, 202, 393,
946, 215, 215, 406, 406, 423, 423, 6, 384, 879, 879, 219, 219,
522, 522, 589, 589, 337, 126, 126, 126, 323, 740, 663, 663, 969,
969, 969, 506, 506, 506, 545, 545, 85, 85, 297, 297, 265, 675,
237, 237, 307, 407, 407, 499, 407, 334, 334, 334, 111, 666, 666,
277, 128, 665, 644, 644, 389, 771, 46, 46, 179, 961, 931, 428,
822, 822, 89, 194, 194, 664, 765, 765, 302, 302, 205, 205, 521,
521, 29, 29, 537, 393, 393, 946, 734, 263, 45, 914, 445, 469,
469, 469, 482, 972, 972, 972, 972, 333, 333, 817, 817, 817, 146,
146, 146, 283, 88, 352, 352, 915, 143, 79, 79, 868, 868, 220,
220, 870, 45, 272, 313, 313, 367, 367, 729, 729, 409, 409, 409,
45, 468, 468, 468, 468, 468, 468, 468, 468, 340, 340, 340, 340,
340, 340, 340, 340, 380, 660, 555, 555, 208, 417, 942, 605, 193,
121, 407, 704, 704, 704, 704, 334, 499, 226, 226, 621, 128, 665,
665, 991, 991, 459, 459, 459, 173, 945, 945, 945, 233, 233, 479,
479, 479, 479, 330, 776, 776, 655, 655, 655, 837, 837, 81, 81,
664, 429, 148, 431, 431, 531, 531, 531, 531, 531, 668, 167, 104,
104, 104, 70, 70, 185, 686, 85, 85, 85, 297, 243, 243, 172,
172, 871, 877, 89, 194, 664, 470, 470, 152, 152, 152, 429, 429,
429, 429, 290, 943, 943, 943, 484, 488, 620, 352, 915, 143, 38,
479, 479, 479, 479, 330, 330, 776, 167, 655, 655, 655, 837, 837,
81, 81, 81, 284, 284, 377, 377, 663, 969, 969, 969, 555, 555,
208, 433, 755, 942, 942, 605, 193, 121, 121, 121, 704, 704, 334])
```
## Eval
```python
# https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/g_00500000
import asrp
hc = asrp.Code2Speech('./g_00500000', vocoder='hifigan', end_tok=999, code_begin_pad=0)
# play on notebook
import IPython.display as ipd
ipd.Audio(data=hc(code), autoplay=False, rate=16000)
```
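If you prefer a file instead of inline playback, the waveform can also be written to disk; a small optional sketch (assumes the `cs` vocoder and `code` sequence from the snippets above, plus the `soundfile` package as an extra dependency):

```python
import soundfile as sf

# cs(code) returns the 16 kHz waveform resynthesized from the discrete units above
sf.write('resynthesized.wav', cs(code), samplerate=16000)
```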
|
keremberke/yolov5m-football | keremberke | 2022-12-30T20:49:15Z | 670 | 3 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/football-object-detection",
"model-index",
"region:us"
] | object-detection | 2022-12-28T23:36:05Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/football-object-detection
model-index:
- name: keremberke/yolov5m-football
results:
- task:
type: object-detection
dataset:
type: keremberke/football-object-detection
name: keremberke/football-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.7405493668158392 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5m-football" src="https://huggingface.co/keremberke/yolov5m-football/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5m-football')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5m-football --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
notibase/mlmods | notibase | 2024-01-15T14:46:37Z | 670 | 0 | null | [
"gguf",
"region:us"
] | null | 2023-05-28T07:23:04Z | Entry not found |
PassionFriend/5CUxYaXQE96MyCNSm1Q2yC8zpEbLNXo1pNNemqREdaRrLPWA_vgg | PassionFriend | 2024-03-01T06:46:58Z | 670 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-16T23:09:00Z | Entry not found |
weezywitasneezy/Oxide-F1-7B-slerp | weezywitasneezy | 2024-04-10T14:53:22Z | 670 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"weezywitasneezy/BenchmarkEngineering-7B-slerp",
"weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp",
"base_model:weezywitasneezy/BenchmarkEngineering-7B-slerp",
"base_model:weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-09T14:59:54Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- weezywitasneezy/BenchmarkEngineering-7B-slerp
- weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
base_model:
- weezywitasneezy/BenchmarkEngineering-7B-slerp
- weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
model-index:
- name: Oxide-F1-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/Oxide-F1-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/Oxide-F1-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/Oxide-F1-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/Oxide-F1-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/Oxide-F1-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/Oxide-F1-7B-slerp
name: Open LLM Leaderboard
---
# Oxide-F1-7B-slerp
Oxide-F1-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [weezywitasneezy/BenchmarkEngineering-7B-slerp](https://huggingface.co/weezywitasneezy/BenchmarkEngineering-7B-slerp)
* [weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp](https://huggingface.co/weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp)
# GGUF Quants available:
* [mradermacher/Oxide-F1-7B-slerp-GGUF](https://huggingface.co/mradermacher/Oxide-F1-7B-slerp-GGUF)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_weezywitasneezy__Oxide-F1-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.64|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot) |87.82|
|MMLU (5-Shot) |64.77|
|TruthfulQA (0-shot) |70.00|
|Winogrande (5-shot) |81.85|
|GSM8k (5-shot) |67.70|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: weezywitasneezy/BenchmarkEngineering-7B-slerp
layer_range: [0, 32]
- model: weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: weezywitasneezy/BenchmarkEngineering-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
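For intuition, slerp interpolates each pair of parent weight tensors along the arc between them instead of along a straight line, and the `t` lists above vary the interpolation factor per layer group (attention vs. MLP) across the depth of the network. A rough, purely illustrative sketch of the core formula (not mergekit's actual implementation):

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical interpolation between two weight tensors; falls back to lerp when nearly parallel.
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    cos = torch.clamp((v0 @ v1) / (v0.norm() * v1.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos)
    if omega.abs() < 1e-4:                      # vectors nearly parallel -> plain linear interpolation
        return (1 - t) * w0 + t * w1
    s0 = torch.sin((1 - t) * omega) / torch.sin(omega)
    s1 = torch.sin(t * omega) / torch.sin(omega)
    return (s0 * v0 + s1 * v1).reshape(w0.shape).to(w0.dtype)
```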
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weezywitasneezy/Oxide-F1-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/Neuralmaath-12B-MoE | allknowingroger | 2024-04-15T10:31:56Z | 670 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralSynthesis-7b-v0.4-slerp",
"DT12the/Math-Mixtral-7B",
"base_model:Kukedlc/NeuralSynthesis-7b-v0.4-slerp",
"base_model:DT12the/Math-Mixtral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-15T10:25:31Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralSynthesis-7b-v0.4-slerp
- DT12the/Math-Mixtral-7B
base_model:
- Kukedlc/NeuralSynthesis-7b-v0.4-slerp
- DT12the/Math-Mixtral-7B
---
# Neuralmaath-12B-MoE
Neuralmaath-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralSynthesis-7b-v0.4-slerp](https://huggingface.co/Kukedlc/NeuralSynthesis-7b-v0.4-slerp)
* [DT12the/Math-Mixtral-7B](https://huggingface.co/DT12the/Math-Mixtral-7B)
## 🧩 Configuration
```yaml
base_model: Kukedlc/NeuralSynthesis-7b-v0.4-slerp
experts:
- source_model: Kukedlc/NeuralSynthesis-7b-v0.4-slerp
positive_prompts: ["what"]
- source_model: DT12the/Math-Mixtral-7B
positive_prompts: ["math"]
```
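The `positive_prompts` above are used by mergekit to derive the routing (gate) vectors that decide, token by token, how strongly each expert's MLP contributes. A purely illustrative sketch of such soft routing with two experts (not mergekit's or Mixtral's exact code):

```python
import torch
import torch.nn.functional as F

hidden = torch.randn(4, 4096)                                # 4 tokens, Mistral-7B hidden size
gate = torch.nn.Linear(4096, 2)                              # router: one logit per expert
experts = [torch.nn.Linear(4096, 4096) for _ in range(2)]    # stand-ins for the expert MLPs

weights = F.softmax(gate(hidden), dim=-1)                    # per-token mixing weights
out = sum(w.unsqueeze(-1) * expert(hidden) for w, expert in zip(weights.unbind(-1), experts))
```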
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Neuralmaath-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
PrunaAI/WizardLM-2-7B-GGUF-smashed | PrunaAI | 2024-04-17T16:32:04Z | 670 | 2 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-17T15:41:41Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/WizardLM-2-7B-GGUF-smashed and, below it, a specific filename to download, such as: microsoft_WizardLM-2-7B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/WizardLM-2-7B-GGUF-smashed microsoft_WizardLM-2-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/WizardLM-2-7B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/WizardLM-2-7B-GGUF-smashed microsoft_WizardLM-2-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m microsoft_WizardLM-2-7B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell before installing; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./microsoft_WizardLM-2-7B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./microsoft_WizardLM-2-7B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original (base) model before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
QuantFactory/Phi-3-mini-4k-instruct-GGUF | QuantFactory | 2024-04-24T07:41:19Z | 670 | 2 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2024-04-23T16:31:47Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
# Phi-3-mini-4k-instruct-GGUF
- This is a quantized version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) created using llama.cpp
- Quants were created using fp16.gguf from [microsoft/Phi-3-mini-4k-instruct-gguf](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf)
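Since this repo ships GGUF files, a quick way to try them locally is `llama-cpp-python` (recent builds are required for the Phi-3 architecture). The sketch below is illustrative only; the filename is a placeholder, so substitute whichever quant you actually downloaded from this repo:

```python
from llama_cpp import Llama

# Placeholder filename: use the actual .gguf file you downloaded from this repo.
llm = Llama(model_path="./Phi-3-mini-4k-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>",
    max_tokens=256,
    stop=["<|end|>"],
)
print(out["choices"][0]["text"])
```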
## Model Description
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
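Putting the loading notes and the chat template together, a minimal generation sketch for the original (non-quantized) checkpoint might look as follows; this is illustrative only and assumes `transformers` >= 4.40, a GPU, and the default generation settings shown:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# apply_chat_template emits the <|user|> ... <|end|> <|assistant|> format shown above
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```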
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
Kukedlc/Smart-LLama-3-8b-Python-v2-16bit | Kukedlc | 2024-05-04T18:39:15Z | 670 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Kukedlc/LLama-3-8b-Python",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-04T18:34:54Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Kukedlc/LLama-3-8b-Python
---
# Uploaded model
- **Developed by:** Kukedlc
- **License:** apache-2.0
- **Finetuned from model:** Kukedlc/LLama-3-8b-Python
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
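The card ships without a usage snippet; below is a minimal, assumption-laden sketch for loading the 16-bit checkpoint with plain `transformers` (the prompt and generation settings are illustrative only):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/Smart-LLama-3-8b-Python-v2-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```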
|
RUNorm/RUNorm-kirillizator | RUNorm | 2024-05-14T00:51:53Z | 670 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-05-05T10:15:07Z | ---
license: apache-2.0
language:
- ru
inference: false
---
Used in https://github.com/Den4ikAI/runorm
You can support the project financially. This will help us develop higher-quality new versions faster.
CloudTips: https://pay.cloudtips.ru/p/b9d86686 |