modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
Alsebay/L3-test-2 | Alsebay | 2024-05-19T06:06:57Z | 684 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-19T05:45:21Z | ---
license: cc-by-nc-4.0
---
Well, nothing much: a test model trained for 2 epochs on an old dataset.
Around 130 rows? Or was it ~150k rows? I don't remember `(*>﹏<*)′
This is my first L3 test with a bigger dataset of novels. Maybe it won't lead to a good model, I don't know, since the OpenLLM Leaderboard is frozen right now.
This is the 2nd of a 4-model L3 series; expect it to be better than the 1st model.
|
crusoeai/dolphin-2.9.1-llama-3-70b-GGUF | crusoeai | 2024-05-23T05:56:52Z | 684 | 2 | null | [
"gguf",
"region:us"
] | null | 2024-05-22T01:05:55Z | Entry not found |
yrju/ultra_llm_merged | yrju | 2024-05-28T04:35:41Z | 684 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-28T02:48:37Z | ---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
- WizardLM/WizardMath-7B-V1.1
- codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
tags:
- mergekit
- merge
---
# ultra_llm_merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: mistralai/Mistral-7B-v0.1
  - layer_range: [0, 32]
    model: WizardLM/WizardMath-7B-V1.1
    parameters:
      density: 0.5
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 32]
    model: codellama/CodeLlama-7b-Instruct-hf
    parameters:
      density: 0.5
      weight: 0.5
```
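To actually run this configuration, mergekit can consume the YAML above directly. The snippet below is a hedged sketch (not part of the original card) based on mergekit's documented Python entry points; the config filename and output directory are assumptions.
```python
# Hedged sketch: running the TIES merge above via mergekit's Python API.
# "ties_config.yaml" and "./ultra_llm_merged" are assumed names, not from the card.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("ties_config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./ultra_llm_merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```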
|
mradermacher/Pantheon-10.7b-GGUF | mradermacher | 2024-06-07T06:11:32Z | 684 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Gryphe/Pantheon-10.7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T01:31:53Z | ---
base_model: Gryphe/Pantheon-10.7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Gryphe/Pantheon-10.7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pantheon-10.7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
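As a minimal, hedged sketch (not part of the original card), a single quant from the table below can also be pulled and run with the `llama-cpp-python` bindings; the Q4_K_S filename comes from the table, while the prompt and context size are arbitrary choices.
```python
# Hedged sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Pantheon-10.7b-GGUF",
    filename="Pantheon-10.7b.Q4_K_S.gguf",  # the "fast, recommended" entry below
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm("Once upon a time", max_tokens=64)
print(result["choices"][0]["text"])
```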
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-10.7b-GGUF/resolve/main/Pantheon-10.7b.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF | Bakanayatsu | 2024-06-22T01:35:48Z | 684 | 0 | null | [
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"laser",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B-laser",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-06-22T01:35:30Z | ---
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B-laser
datasets:
- mlabonne/chatml_dpo_pairs
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
- laser
- llama-cpp
- gguf-my-repo
model-index:
- name: NeuralHermes-2.5-Mistral-7B-laser
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.95
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser
name: Open LLM Leaderboard
---
# Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF
This model was converted to GGUF format from [`mlabonne/NeuralHermes-2.5-Mistral-7B-laser`](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -c 2048
```
|
CHE-72/Breeze-7B-Instruct-v1_0-Q3_K_L-GGUF | CHE-72 | 2024-06-22T18:12:14Z | 684 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-06-22T18:11:58Z | ---
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
language:
- zh
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Breeze-7B-Instruct-v1_0-Q3_K_L-GGUF
This model was converted to GGUF format from [`MediaTek-Research/Breeze-7B-Instruct-v1_0`](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q3_K_L-GGUF --hf-file breeze-7b-instruct-v1_0-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q3_K_L-GGUF --hf-file breeze-7b-instruct-v1_0-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q3_K_L-GGUF --hf-file breeze-7b-instruct-v1_0-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q3_K_L-GGUF --hf-file breeze-7b-instruct-v1_0-q3_k_l.gguf -c 2048
```
|
KoboldAI/GPT-Neo-2.7B-Picard | KoboldAI | 2023-08-22T23:47:43Z | 683 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
---
# GPT-Neo 2.7B - Picard
## Model Description
GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.
## Training data
The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='mrseeker87/GPT-Neo-2.7B-Picard')
>>> generator("Jean-Luc Picard", do_sample=True, min_length=50)
[{'generated_text': 'Jean-Luc Picard, the captain of a Federation starship in command of one of Starfleet's few fulltime scientists.'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
``` |
deepset/electra-base-squad2 | deepset | 2023-09-27T12:04:02Z | 683 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/electra-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 77.6074
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzE5NTRmMmUwYTk1MTI0NjM0ZmQwNDFmM2Y4Mjk4ZWYxOGVmOWI3ZGFiNWM4OTUxZDQ2ZjdmNmU3OTk5ZjRjYyIsInZlcnNpb24iOjF9.0VZRewdiovE4z3K5box5R0oTT7etpmd0BX44FJBLRFfot-uJ915b-bceSv3luJQ7ENPjaYSa7o7jcHlDzn3oAw
- type: f1
value: 81.7181
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2VlMzM0Y2UzYjhhNTJhMTFiYWZmMDNjNjRiZDgwYzc5NWE3N2M4ZGFlYWQ0ZjVkZTE2MDU0YmMzMDc1MTY5MCIsInZlcnNpb24iOjF9.jRV58UxOM7CJJSsmxJuZvlt00jMGA1thp4aqtcFi1C8qViQ1kW7NYz8rg1gNTDZNez2UwPS1NgN_HnnwBHPbCQ
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 80.407
name: Exact Match
- type: f1
value: 88.942
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- type: exact_match
value: 23.533
name: Exact Match
- type: f1
value: 36.521
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_adversarial
type: squad_adversarial
config: AddOneSent
split: validation
metrics:
- type: exact_match
value: 73.867
name: Exact Match
- type: f1
value: 81.381
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts amazon
type: squadshifts
config: amazon
split: test
metrics:
- type: exact_match
value: 64.512
name: Exact Match
- type: f1
value: 80.166
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts new_wiki
type: squadshifts
config: new_wiki
split: test
metrics:
- type: exact_match
value: 76.568
name: Exact Match
- type: f1
value: 87.706
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts nyt
type: squadshifts
config: nyt
split: test
metrics:
- type: exact_match
value: 77.884
name: Exact Match
- type: f1
value: 87.858
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts reddit
type: squadshifts
config: reddit
split: test
metrics:
- type: exact_match
value: 64.399
name: Exact Match
- type: f1
value: 78.096
name: F1
---
# electra-base for QA
## Overview
**Language model:** electra-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
seed=42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2",tokenizer="deepset/electra-base-squad2")
```
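Beyond constructing the reader, a quick end-to-end check can be run against a single document. The snippet below is a hedged sketch (not from the original card); the argument names follow Haystack 1.x and may differ in other versions.
```python
# Hedged sketch: querying the reader directly (Haystack 1.x style APIs assumed).
from haystack.nodes import FARMReader
from haystack.schema import Document

reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
docs = [Document(content="The option to convert models between FARM and transformers "
                         "gives freedom to the user and lets people easily switch between frameworks.")]
result = reader.predict(query="Why is model conversion important?", documents=docs, top_k=1)
print(result["answers"][0].answer)
```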
## Authors
Vaishali Pal `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
vasista22/whisper-hindi-large-v2 | vasista22 | 2023-04-24T21:14:45Z | 683 | 46 | transformers | [
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-01-14T14:34:03Z | ---
language:
- hi
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Hindi Large-v2 - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: hi_in
split: test
metrics:
- type: wer
value: 6.8
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
metrics:
- type: wer
value: 10.98
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Hindi Large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on Hindi data drawn from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-hindi-large-v2", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-hindi-large-v2", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [GramVaani ASR Corpus](https://sites.google.com/view/gramvaaniasrchallenge/dataset?authuser=0)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#hindi-labelled--total-duration-is-239876-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
Evaluation Data:
- [GramVaani ASR Corpus Test Set](https://sites.google.com/view/gramvaaniasrchallenge/dataset?authuser=0)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
## Training hyperparameters
The following hyperparameters were used during training (a hedged sketch mapping them onto `transformers` training arguments follows the list):
- learning_rate: 0.75e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25000
- training_steps: 57000 (Initially set to 116255 steps)
- mixed_precision_training: True
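As a rough, hedged illustration (not taken from the training scripts), these settings map onto `transformers` Seq2SeqTrainingArguments roughly as follows; the output directory name is an assumption, and the actual fine-tuning code lives in the whisper-finetune repository linked above.
```python
# Hedged sketch: approximate mapping of the listed hyperparameters onto
# transformers' Seq2SeqTrainingArguments. output_dir is an assumed name.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-hindi-large-v2",  # assumption
    learning_rate=0.75e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=24,
    seed=22,
    optim="adamw_bnb_8bit",
    lr_scheduler_type="linear",
    warmup_steps=25000,
    max_steps=57000,
    fp16=True,  # mixed_precision_training: True
)
```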
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India. |
timm/convnext_tiny.in12k_ft_in1k_384 | timm | 2024-02-10T23:29:51Z | 683 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-12k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-01-18T20:12:06Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for convnext_tiny.in12k_ft_in1k_384
A ConvNeXt image classification model. Pretrained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman.
ImageNet-12k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
Fine-tuning performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.6
- GMACs: 13.1
- Activations (M): 39.5
- Image size: 384 x 384
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_tiny.in12k_ft_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
    'convnext_tiny.in12k_ft_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 96, 96, 96])
    #  torch.Size([1, 192, 48, 48])
    #  torch.Size([1, 384, 24, 24])
    #  torch.Size([1, 768, 12, 12])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
    'convnext_tiny.in12k_ft_in1k_384',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
Mr-Bhaskar/FBt | Mr-Bhaskar | 2024-05-21T19:08:43Z | 683 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:Mr-Bhaskar/Synthetic_Therapy_Conversations",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-24T08:50:19Z | ---
license: other
datasets:
- Mr-Bhaskar/Synthetic_Therapy_Conversations
---
---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
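Since this section is left unfilled, here is a hedged, generic sketch (an assumption, not author-provided): the repository is tagged as a `llama` text-generation model for `transformers`, so it should load as a standard causal LM.
```python
# Hedged sketch, not provided by the model author: generic causal-LM loading.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mr-Bhaskar/FBt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("I have been feeling anxious lately.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```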
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thomasgauthier/Unmixtraled-22B-v0.1-expert-2 | thomasgauthier | 2024-04-12T16:44:39Z | 683 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mixtral",
"dense",
"expert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T02:57:13Z | ---
license: apache-2.0
tags:
- mixtral
- dense
- mistral
- expert
---
# Unmixtraled 22B expert 2
> [!WARNING]
> This model outputs gibberish as it was not trained under the dense configuration. Finetuning or merging is needed to make this model useful.
This is a 22B Mistral model recycling weights from [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1).
The model was adapted from a Mixtral architecture to a dense Mistral architecture with the same number of layers, attention heads and hidden dimensions.
Embeddings, attention, layer norms and LM head weights were taken directly from the 8x22B model, all MLP weights were taken from expert 2.
The following named weight correspondence was used (a short naming helper illustrating the mapping appears after the table):
| Mistral weight | Mixtral weight |
|----------------|----------------------------------|
| `gate_proj` | `experts.2.w1` |
| `down_proj` | `experts.2.w2` |
| `up_proj` | `experts.2.w3` |
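As a compact, hedged illustration (the full extraction script appears in the Code section below), the mapping amounts to renaming expert-2 MLP tensors into the dense Mistral layout; the helper name here is hypothetical.
```python
# Hedged illustration: dense Mistral vs. Mixtral expert-2 tensor names for one layer.
def mlp_weight_names(layer: int, proj: str, expert: int = 2) -> tuple[str, str]:
    w = {"gate_proj": "w1", "down_proj": "w2", "up_proj": "w3"}[proj]
    mistral_name = f"model.layers.{layer}.mlp.{proj}.weight"
    mixtral_name = f"model.layers.{layer}.block_sparse_moe.experts.{expert}.{w}.weight"
    return mistral_name, mixtral_name

print(mlp_weight_names(0, "gate_proj"))
# ('model.layers.0.mlp.gate_proj.weight', 'model.layers.0.block_sparse_moe.experts.2.w1.weight')
```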
## Unmixtraled models
| Expert | Source | Wikitext perplexity |
|--------|-----------------|---------------------|
| [Unmixtraled-22B-v0.1-expert-0](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-0) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 0 MLPs | 696.6932983398438 |
| [Unmixtraled-22B-v0.1-expert-1](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-1) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 1 MLPs | 6853.04248046875 |
| [**Unmixtraled-22B-v0.1-expert-2**](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-2) | **Mixtral 8x22B embed, attn, layernorm, lm_head + expert 2 MLPs** | **4689.181640625** |
| [Unmixtraled-22B-v0.1-expert-3](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-3) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 3 MLPs | 782.3755493164062 |
| [Unmixtraled-22B-v0.1-expert-4](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-4) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 4 MLPs | 2844.943603515625 |
| [Unmixtraled-22B-v0.1-expert-5](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-5) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 5 MLPs | 1099.32373046875 |
| [Unmixtraled-22B-v0.1-expert-6](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-6) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 6 MLPs | 341.5309753417969 |
| [Unmixtraled-22B-v0.1-expert-7](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-7) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 7 MLPs | 2099.63818359375 |
| [Unmixtraled-22B-v0.1-lerp](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-lerp) | Mixtral 8x22B embed, attn, layernorm, lm_head + linear merge of expert 0-7 MLPs | 1873.9874267578125 |
# Code
The following code was used to extract the experts and construct the dense models:
```python
# pip install -U transformers huggingface_hub "git+https://github.com/arcee-ai/mergekit@7467108c05d56ef2bb4b8f33936d437dc448f7dd"
import fnmatch
import json
import os
import re
import shutil
import torch
from huggingface_hub import snapshot_download
from mergekit.architecture import get_architecture_info
from mergekit.common import ModelReference
from mergekit.io import LazyTensorLoader, TensorWriter
from tqdm import tqdm
MIXTRAL_MODEL_ID = "mistral-community/Mixtral-8x22B-v0.1"
MIXTRAL_PATH = snapshot_download(repo_id=MIXTRAL_MODEL_ID)
print(f"Mixtral downloaded to: {MIXTRAL_PATH}")
MISTRAL_PATH = snapshot_download(
    repo_id="mistralai/Mistral-7B-v0.1", allow_patterns=["config.json"]
)
print(f"Mistral config downloaded to: {MISTRAL_PATH}")

with open(os.path.join(MISTRAL_PATH, "config.json"), "r") as f:
    mistral_config = json.load(f)

with open(os.path.join(MIXTRAL_PATH, "config.json"), "r") as f:
    mixtral_config = json.load(f)

combined_config = {
    key: mixtral_config[key] for key in mistral_config if key in mixtral_config
}
combined_config["architectures"] = ["MistralForCausalLM"]
combined_config["model_type"] = "mistral"

mixtral_model_ref = ModelReference.parse(MIXTRAL_PATH)
mixtral_architecture_info = get_architecture_info(mixtral_model_ref.config())
mixtral_loader = LazyTensorLoader(mixtral_model_ref.tensor_index(), lazy_unpickle=True)

ALLOW_LIST = ["generation_config.json", "tokenizer.model", "tokenizer_config.json"]

def copy_directory(src, dest, allowed_patterns):
    os.makedirs(dest, exist_ok=True)
    for root, dirs, files in os.walk(src):
        # Only keep directories that match at least one of the allowed patterns
        dirs[:] = [d for d in dirs if any(fnmatch.fnmatch(d, pattern) for pattern in allowed_patterns)]
        for file in files:
            # Only copy files that match at least one of the allowed patterns
            if any(fnmatch.fnmatch(file, pattern) for pattern in allowed_patterns):
                src_path = os.path.join(root, file)
                dest_path = os.path.join(dest, os.path.relpath(src_path, src))
                os.makedirs(os.path.dirname(dest_path), exist_ok=True)
                shutil.copy2(src_path, dest_path)

def get_tensor(layer_num, expert_num, tensor_type):
    weight_name = f"model.layers.{layer_num}.block_sparse_moe.experts.{expert_num}.{tensor_type}.weight"
    return mixtral_loader.get_tensor(weight_name)

def extract_layer_number(string):
    match = re.search(r"layers\.(\d+)\.", string)
    return int(match.group(1)) if match else None

def save_expert_as_dense(output_path, expert_num):
    dense_model_ref = ModelReference.parse(output_path)
    dense_architecture_info = get_architecture_info(dense_model_ref.config())

    writer = TensorWriter(output_path, safe_serialization=True)

    for weight_info in tqdm(dense_architecture_info.all_weights(dense_model_ref.config())):
        if weight_info.name.endswith(".up_proj.weight"):
            layer_num = extract_layer_number(weight_info.name)
            writer.save_tensor(weight_info.name, get_tensor(layer_num, expert_num, "w3"))
        elif weight_info.name.endswith(".down_proj.weight"):
            layer_num = extract_layer_number(weight_info.name)
            writer.save_tensor(weight_info.name, get_tensor(layer_num, expert_num, "w2"))
        elif weight_info.name.endswith(".gate_proj.weight"):
            layer_num = extract_layer_number(weight_info.name)
            writer.save_tensor(weight_info.name, get_tensor(layer_num, expert_num, "w1"))
        else:
            writer.save_tensor(weight_info.name, mixtral_loader.get_tensor(weight_info.name))

    writer.finalize()

num_experts = mixtral_config["num_local_experts"]

for expert_num in range(num_experts):
    dense_path = f"./dense_expert_{expert_num}"
    copy_directory(MIXTRAL_PATH, dense_path, ALLOW_LIST)

    with open(os.path.join(dense_path, "config.json"), "w") as f:
        json.dump(combined_config, f, indent=2)

    save_expert_as_dense(dense_path, expert_num)
    print(f"Dense model #{expert_num} saved to {os.path.abspath(dense_path)}")
``` |
DreadPoor/Siren-7B-slerp | DreadPoor | 2024-04-15T02:37:55Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/Harpy-7B-Model_Stock",
"DreadPoor/Nynph-7B-Model_Stock",
"base_model:DreadPoor/Harpy-7B-Model_Stock",
"base_model:DreadPoor/Nynph-7B-Model_Stock",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T04:45:08Z | ---
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/Harpy-7B-Model_Stock
- DreadPoor/Nynph-7B-Model_Stock
base_model:
- DreadPoor/Harpy-7B-Model_Stock
- DreadPoor/Nynph-7B-Model_Stock
license: apache-2.0
---
# Siren-7B-slerp
Siren-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DreadPoor/Harpy-7B-Model_Stock](https://huggingface.co/DreadPoor/Harpy-7B-Model_Stock)
* [DreadPoor/Nynph-7B-Model_Stock](https://huggingface.co/DreadPoor/Nynph-7B-Model_Stock)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: DreadPoor/Harpy-7B-Model_Stock
        layer_range: [0, 32]
      - model: DreadPoor/Nynph-7B-Model_Stock
        layer_range: [0, 32]
merge_method: slerp
base_model: DreadPoor/Harpy-7B-Model_Stock
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DreadPoor/Siren-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jpquiroga/Mistral_7B_ties_merge_instruct_open_orca_codeninja | jpquiroga | 2024-04-12T10:49:24Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T09:12:35Z | ---
base_model:
- beowolx/CodeNinja-1.0-OpenChat-7B
- mistralai/Mistral-7B-v0.1
- mistralai/Mistral-7B-Instruct-v0.1
- Open-Orca/Mistral-7B-OpenOrca
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.1
    parameters:
      density: 0.3
      weight: 0.33
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      density: 0.3
      weight: 0.33
  - model: beowolx/CodeNinja-1.0-OpenChat-7B
    parameters:
      density: 0.3
      weight: 0.34
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
``` |
Noodlz/Dolph-Lund-Wizard-7B | Noodlz | 2024-04-17T08:42:46Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T07:06:17Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Dolph-Lund-Wizard-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest as a base.
### Models Merged
The following models were included in the merge:
* /Users/etherops1/AI/Not-WizardLM-2-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
parameters:
int8_mask: true
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
embed_slerp: true
models:
- model: /Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
# No parameters necessary for base model
- model: /Users/etherops1/AI/Not-WizardLM-2-7B
parameters:
density: 0.58
weight: 0.4
base_model: /Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
tokenizer_source: model:/Users/etherops1/AI/Noodlz/Noodlz_DolphinLake-DARE_TIE_SLERP-tokenwest
dtype: bfloat16
``` |
allknowingroger/CeptrixBeagle-12B-MoE | allknowingroger | 2024-04-17T07:55:55Z | 683 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/NeuralCeptrix-7B-slerp",
"paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:allknowingroger/NeuralCeptrix-7B-slerp",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T07:48:27Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/NeuralCeptrix-7B-slerp
- paulml/OmniBeagleSquaredMBX-v3-7B
base_model:
- allknowingroger/NeuralCeptrix-7B-slerp
- paulml/OmniBeagleSquaredMBX-v3-7B
---
# CeptrixBeagle-12B-MoE
CeptrixBeagle-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/NeuralCeptrix-7B-slerp](https://huggingface.co/allknowingroger/NeuralCeptrix-7B-slerp)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
## 🧩 Configuration
```yaml
base_model: allknowingroger/NeuralCeptrix-7B-slerp
experts:
- source_model: allknowingroger/NeuralCeptrix-7B-slerp
positive_prompts: ["what"]
- source_model: paulml/OmniBeagleSquaredMBX-v3-7B
positive_prompts: ["why"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/CeptrixBeagle-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Kquant03/DolphinHermesPro-ModelStock | Kquant03 | 2024-04-18T22:44:39Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T21:14:15Z | ---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/Jmu5DHPZwv4so5Tn-xkIO.png"
---
# DolphinHermesPro-ModelStock

DolphinHermesPro-ModelStock is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
```yaml
models:
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
- model: NousResearch/Hermes-2-Pro-Mistral-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kquant03/DolphinHermesPro-ModelStock"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CreitinGameplays/ConvAI-9b | CreitinGameplays | 2024-05-27T12:36:18Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:CreitinGameplays/merged-data-v2",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-18T16:14:34Z | ---
license: mit
datasets:
- CreitinGameplays/merged-data-v2
base_model:
- HuggingFaceH4/zephyr-7b-beta
- mistral-community/Mistral-7B-v0.2
language:
- en
---
# **ConvAI-9b: A Conversational AI Model**

## **1. Model Details**
* **Model Name:** ConvAI-9b
* **Authors:** CreitinGameplays
* **Date:** April 18th, 2024
## **2. Model Description**
ConvAI-9b is a fine-tuned conversational AI model with 9 billion parameters. It is based on the following models:
* **Base Model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* **Merged Model:** [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2)
## **3. Training Data**
The model was fine-tuned on a custom dataset of conversations between an AI assistant and a user. The dataset format followed a specific structure:
```
<|system|> (system prompt, e.g.: You are a helpful AI language model called ChatGPT, your goal is helping users with their questions) </s> <|user|> (user prompt) </s>
```
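As a rough illustration only (not an official snippet from the model author), a prompt in this format could be assembled and run with `transformers` roughly as follows; the system and user strings are placeholder examples:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CreitinGameplays/ConvAI-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assemble a prompt following the dataset structure shown above.
system_prompt = "You are a helpful AI language model, your goal is helping users with their questions."
user_prompt = "What is the capital of France?"
prompt = f"<|system|> {system_prompt} </s> <|user|> {user_prompt} </s>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```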
## **4. Intended Uses**
ConvAI-9b is intended for use in conversational AI applications, such as:
* Chatbots
* Virtual assistants
* Interactive storytelling
* Educational tools
## **5. Limitations**
* Like any other language model, ConvAI-9b may generate incorrect or misleading responses.
* It may exhibit biases present in the training data.
* The model's performance can be affected by the quality and format of the input text.
## **6. Evaluation**
| Metrics |Value|
|----------|-----|
|ARC |57.50|
|HellaSwag |80.34|
|TruthfulQA|49.54|
|Winogrande|76.24|
More detailed evaluation [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CreitinGameplays__ConvAI-9b)
|
allknowingroger/Llama3merge7-15B-MoE | allknowingroger | 2024-04-22T09:02:35Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralLlamita-3-8B-v0.2",
"cognitivecomputations/dolphin-2.9-llama3-8b",
"conversational",
"base_model:Kukedlc/NeuralLlamita-3-8B-v0.2",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-22T08:54:50Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralLlamita-3-8B-v0.2
- cognitivecomputations/dolphin-2.9-llama3-8b
base_model:
- Kukedlc/NeuralLlamita-3-8B-v0.2
- cognitivecomputations/dolphin-2.9-llama3-8b
---
# Llama3merge7-15B-MoE
Llama3merge7-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralLlamita-3-8B-v0.2](https://huggingface.co/Kukedlc/NeuralLlamita-3-8B-v0.2)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
## 🧩 Configuration
```yaml
base_model: Kukedlc/NeuralLlamita-3-8B-v0.2
experts:
- source_model: Kukedlc/NeuralLlamita-3-8B-v0.2
positive_prompts: ["why"]
- source_model: cognitivecomputations/dolphin-2.9-llama3-8b
positive_prompts: ["what"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Llama3merge7-15B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
chujiezheng/zephyr_0.2 | chujiezheng | 2024-04-28T05:25:35Z | 683 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:2404.16792",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-25T21:47:17Z | ---
license: apache-2.0
language:
- en
---
# zephyr_0.2
The DPO-trained model from `alignment-handbook/zephyr-7b-sft-full` using 20% data of `HuggingFaceH4/ultrafeedback_binarized`, as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
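A minimal usage sketch (not from the model author), assuming the tokenizer ships the usual Zephyr chat template:
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="chujiezheng/zephyr_0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```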
|
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 | ShenaoZhang | 2024-04-27T02:34:10Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-27T02:01:33Z | ---
license: mit
base_model: ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_4iters_bs256_nodpo_only4w_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
grimjim/llama-3-experiment-v1-9B | grimjim | 2024-04-30T04:08:10Z | 683 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"pytorch",
"mergekit",
"merge",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T02:15:41Z | ---
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- meta
- llama-3
- pytorch
- mergekit
- merge
license: llama3
license_link: LICENSE
pipeline_tag: text-generation
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Corwin! How are you?
- example_title: Hellriding out of Amber
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
- role: user
content: Can you recommend a good destination for a hellride out of Amber?
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
model-index:
- name: grimjim/grimjim/llama-3-experiment-v1-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/grimjim/llama-3-experiment-v1-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 78.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/llama-3-experiment-v1-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/llama-3-experiment-v1-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/llama-3-experiment-v1-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/llama-3-experiment-v1-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/llama-3-experiment-v1-9B
name: Open LLM Leaderboard
---
# llama-3-experiment-v1-9B
This is an experimental merge that replicates additional layers of the model without post-merge healing.
There is damage to the model, but it appears to be tolerable as is; the performance difference in benchmarks from the original 8B Instruct model does not appear to be significant.
The resulting impact on narrative text completion may also be of interest.
Light testing performed with instruct prompting and the following sampler settings:
- temp=1 and minP=0.02
- temp=1 and smoothing factor=0.33
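The following sketch (not from the model author) shows one way to reproduce the first setting with `transformers`; note that `min_p` requires a reasonably recent `transformers` release, and the example messages reuse the widget prompts above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "grimjim/llama-3-experiment-v1-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful and honest assistant. Please, respond concisely and truthfully."},
    {"role": "user", "content": "Can you recommend a good destination for a hellride out of Amber?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Stop on either the regular EOS or Llama 3's <|eot_id|> turn delimiter.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
output = model.generate(input_ids, max_new_tokens=300, do_sample=True,
                        temperature=1.0, min_p=0.02, eos_token_id=terminators)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```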
Full weights: [grimjim/llama-3-experiment-v1-9B](https://huggingface.co/grimjim/llama-3-experiment-v1-9B)
GGUF quants: [grimjim/llama-3-experiment-v1-9B-GGUF](https://huggingface.co/grimjim/llama-3-experiment-v1-9B-GGUF)
This is a merge of pre-trained language model meta-llama/Meta-Llama-3-8B-Instruct created using [mergekit](https://github.com/cg123/mergekit).
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* meta-llama/Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [0, 12]
- sources:
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
|
jarod0411/linker_v3 | jarod0411 | 2024-05-02T03:09:08Z | 683 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:jarod0411/linker_v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T23:12:00Z | ---
license: mit
base_model: jarod0411/linker_v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: linker_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linker_v3
This model is a fine-tuned version of [jarod0411/linker_v2](https://huggingface.co/jarod0411/linker_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2291
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.2532 | 1.0 | 33912 | 0.2459 | 0.9160 |
| 0.2433 | 2.0 | 67824 | 0.2392 | 0.9179 |
| 0.2384 | 3.0 | 101736 | 0.2358 | 0.9188 |
| 0.2354 | 4.0 | 135648 | 0.2336 | 0.9195 |
| 0.2331 | 5.0 | 169560 | 0.2322 | 0.9199 |
| 0.2307 | 6.0 | 203472 | 0.2311 | 0.9202 |
| 0.2293 | 7.0 | 237384 | 0.2303 | 0.9205 |
| 0.2282 | 8.0 | 271296 | 0.2297 | 0.9207 |
| 0.2274 | 9.0 | 305208 | 0.2293 | 0.9208 |
| 0.2269 | 10.0 | 339120 | 0.2291 | 0.9209 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bond005/whisper-large-v3-ru-podlodka | bond005 | 2024-05-22T15:46:33Z | 683 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ru",
"dataset:bond005/taiga_speech_v2",
"dataset:bond005/podlodka_speech",
"dataset:bond005/rulibrispeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-02T06:58:20Z | ---
license: apache-2.0
widget:
- example_title: Нейронные сети - это хорошо!
src: >-
https://huggingface.co/bond005/whisper-large-v3-ru-podlodka/resolve/main/test_sound_ru.flac
- example_title: >-
К сожалению, система распознавания речи не всегда стабильна, особенно в
шумных условиях.
src: >-
https://huggingface.co/bond005/whisper-large-v3-ru-podlodka/resolve/main/test_sound_with_noise.wav
- example_title: >-
Мимо театра мальчик ходил довольно часто — белое, со взбитыми сливками,
здание-торт.
src: >-
https://huggingface.co/bond005/whisper-large-v3-ru-podlodka/resolve/main/anna_matveeva_test.wav
datasets:
- bond005/taiga_speech_v2
- bond005/podlodka_speech
- bond005/rulibrispeech
language:
- ru
pipeline_tag: automatic-speech-recognition
metrics:
- wer
model-index:
- name: Whisper Large V3 Russian Podlodka by Ivan Bondarenko
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Podlodka.io
type: bond005/podlodka_speech
args: ru
metrics:
- name: WER (with punctuation and capital letters)
type: wer
value: 20.910
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Podlodka.io
type: bond005/podlodka_speech
args: ru
metrics:
- name: WER (without punctuation)
type: wer
value: 10.987
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Russian Librispeech
type: bond005/rulibrispeech
args: ru
metrics:
- name: WER (without punctuation)
type: wer
value: 9.795
--- |
Cesco2004/TW3CESCO.V1 | Cesco2004 | 2024-05-02T16:12:42Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:chihoonlee10/T3Q-Mistral-UB-DPO-v1.0",
"base_model:paulml/NeuralOmniWestBeaglake-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-02T15:46:05Z | ---
base_model:
- chihoonlee10/T3Q-Mistral-UB-DPO-v1.0
- paulml/NeuralOmniWestBeaglake-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [chihoonlee10/T3Q-Mistral-UB-DPO-v1.0](https://huggingface.co/chihoonlee10/T3Q-Mistral-UB-DPO-v1.0)
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: chihoonlee10/T3Q-Mistral-UB-DPO-v1.0
layer_range: [0, 32]
- model: paulml/NeuralOmniWestBeaglake-7B
layer_range: [0, 32]
merge_method: slerp # This should not be indented under 'sources'
base_model: paulml/NeuralOmniWestBeaglake-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
ibivibiv/llama3-8b-ultrafeedback-dpo | ibivibiv | 2024-05-02T18:00:41Z | 683 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-02T17:19:53Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_1 | ShenaoZ | 2024-05-05T07:20:13Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-05T06:44:47Z | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: 0.0005_withdpo_4iters_bs256_555lr_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0005_withdpo_4iters_bs256_555lr_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
shyamieee/J4RVIZ-v5.0 | shyamieee | 2024-05-06T09:52:33Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-06T08:31:49Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# j4rviz_v5_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using bophades-mistral-truthy-DPO-7B as a base.
### Models Merged
The following models were included in the merge:
* multi_verse_model
* Calme-7B-Instruct-v0.9
### Configuration
|
Mr-Bhaskar/fbt-mistral-7b | Mr-Bhaskar | 2024-05-21T18:56:25Z | 683 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-10T17:19:52Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gabifg/Grypho-ties-7b | gabifg | 2024-05-12T15:50:16Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"nbeerbower/bophades-mistral-math-DPO-7B",
"Danielbrdz/Barcenas-Mistral-7b",
"base_model:nbeerbower/bophades-mistral-math-DPO-7B",
"base_model:Danielbrdz/Barcenas-Mistral-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-11T14:44:47Z | ---
tags:
- merge
- mergekit
- lazymergekit
- nbeerbower/bophades-mistral-math-DPO-7B
- Danielbrdz/Barcenas-Mistral-7b
base_model:
- nbeerbower/bophades-mistral-math-DPO-7B
- Danielbrdz/Barcenas-Mistral-7b
license: apache-2.0
---
# Grypho-ties-7b
Grypho-ties-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [nbeerbower/bophades-mistral-math-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-math-DPO-7B)
* [Danielbrdz/Barcenas-Mistral-7b](https://huggingface.co/Danielbrdz/Barcenas-Mistral-7b)
## 🧩 Configuration
```yaml
models:
- model: OpenPipe/mistral-ft-optimized-1218
# no parameters necessary for base model
- model: nbeerbower/bophades-mistral-math-DPO-7B
parameters:
density: 0.5
weight: 0.5
- model: Danielbrdz/Barcenas-Mistral-7b
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gabifg/Grypho-ties-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
shyamieee/Padma-SLM-7b-v1.0 | shyamieee | 2024-05-15T06:00:03Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-11T16:22:53Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Padma_SLM_7b_v1_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using bophades-mistral-truthy-DPO-7B as a base.
### Models Merged
The following models were included in the merge:
* multi_verse_model
* YamshadowExperiment28-7B
* Calme-7B-Instruct-v0.9
### Configuration
|
GeorgiaTech/0.0_llama_nodpo_3iters_bs128_531lr_iter_1 | GeorgiaTech | 2024-05-12T01:23:04Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T00:05:53Z | ---
license: other
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0_llama_nodpo_3iters_bs128_531lr_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_llama_nodpo_3iters_bs128_531lr_iter_1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
Edgerunners/yi-9b-may-ortho-baukit-30fail-3000total-bf16 | Edgerunners | 2024-05-12T20:51:06Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T18:43:21Z | ---
license: cc-by-nc-4.0
---
New Yi-9B released in May.
Test results: refusal removal worked, but Yi-9B chat is still fairly weak, and orthogonalization won't fix that; judge for yourself.
This version had only 30 refusals out of 3000 orthogonalization tests, in line with the other releases in terms of refusals.
---
wassname (updated baukit) implementation of the paper: https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction
applied here to the May release of Yi-9B
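For readers unfamiliar with the technique, here is a generic, minimal sketch of the weight-orthogonalization idea from the linked post; it is not the authors' actual code, and the refusal direction itself must be estimated separately (e.g. from contrastive activations):
```python
import torch

def orthogonalize(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component along `refusal_dir` from a matrix that writes
    into the residual stream (weight shape: [d_model, d_in])."""
    d = refusal_dir / refusal_dir.norm()
    # (I - d d^T) @ W zeroes out the refusal direction in every output.
    return weight - torch.outer(d, d) @ weight
```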
1. The Model is meant purely for alignment research and exploration of alignmentforum theory
2. The Model is provided "AS IS" and "AS AVAILABLE" without warranty of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title, or non-infringement.
3. The Provider disclaims all liability for any damages or losses resulting from the use or misuse of the Model, including but not limited to any damages or losses arising from the use of the Model for purposes other than those intended by the Provider.
4. The Provider does not endorse or condone the use of the Model for any purpose that violates applicable laws, regulations, or ethical standards.
5. The Provider does not warrant that the Model will meet your specific requirements or that it will be error-free or that it will function without interruption.
6. You assume all risks associated with the use of the Model, including but not limited to any loss of data, loss of business, or damage to your reputation. |
p208p2002/llama-3-zhtw-8B | p208p2002 | 2024-05-31T08:48:05Z | 683 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"zh",
"dataset:HuggingFaceFW/fineweb",
"dataset:erhwenkuo/c4-chinese-zhtw",
"dataset:erhwenkuo/wikipedia-zhtw",
"dataset:p208p2002/wudao",
"dataset:p208p2002/NDLTD-T10-90-111",
"dataset:codeparrot/github-code-clean",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-13T01:32:13Z | ---
datasets:
- HuggingFaceFW/fineweb
- erhwenkuo/c4-chinese-zhtw
- erhwenkuo/wikipedia-zhtw
- p208p2002/wudao
- p208p2002/NDLTD-T10-90-111
- codeparrot/github-code-clean
language:
- en
- zh
license: llama3
---
# Llama 3 zhtw
An experiment with Chinese continued pretraining (CP) on Llama 3, trained on a total of 800M tokens.
Because the quality of the available Chinese pretraining corpora still leaves room for improvement, performance after CP did not surpass the original Llama 3; we observed a similar situation when comparing several Chinese Llama 3 models trained by the open-source community.
For English, Llama 3 zhtw uses FineWeb, which keeps its MMLU score above the other Chinese CP models and on par with the original Llama 3.
## Benchmarks
| Models | | ↑ TMMLU+ (ACC) | CMMLU (ACC) | MMLU (ACC) |
| ---------------------------- | --- | -------------- | ------------- | ------------- |
| | | TC, Knowledge | CN, Knowledge | EN, Knowledge |
| | | 5 shot | 5 shot | 5 shot |
| Yi-6B | 6B | 49.63 | 75.53 | 65.35 |
| Qwen-7B | 7B | 42.84 | 73.1 | 61.00 |
| Meta-Llama-3-8B | 8B | 41.97 | 50.8 | 65.17 |
| **p208p2002/llama-3-zhtw-8B** | 8B | 41.84 | 50.6 | 65.31 |
| Breeze-7B-Base-v0_1 | 7B | 40.35 | 44.05 | 61.63 |
| hfl/llama-3-chinese-8b | 8B | 39.64 | 50.9 | 61.1 |
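As a quick sanity-check sketch (not part of the original card), the model can be loaded like any Llama 3 base checkpoint for plain text completion; the prompt below is an arbitrary example:
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="p208p2002/llama-3-zhtw-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Prompt means "The highest mountain in Taiwan is".
print(pipe("臺灣最高的山是", max_new_tokens=64)[0]["generated_text"])
```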
## Recipe
### Datasets
| Dataset | Lang | Weight |
|----------------|-------------|--------|
| FineWeb | en | 0.35 |
| Wudao | zh-cn | 0.1 |
| C4Tw | zh-tw | 0.1 |
| WikiZhTw | zh-tw | 0.15 |
| NdltdT10 | zh-tw | 0.1 |
| GitHubMarkDown | code | 0.1 |
| GitHubPython | code | 0.1 |
### Hyper Parameters
- Learning Rate: 1e-7
- Global Batch Size: 60
- Sequence Length: 8192 |
hon9kon9ize/yi-1.5-6b-yue-vocab-expanded | hon9kon9ize | 2024-06-08T11:15:32Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"freeze",
"generated_from_trainer",
"conversational",
"base_model:01-ai/Yi-1.5-6B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-14T12:17:13Z | ---
license: other
base_model: 01-ai/Yi-1.5-6B
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: yi-1.5-6b-yub-vocab-expanded
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yi-1.5-6b-yub-vocab-expanded
This model is a fine-tuned version of [01-ai/Yi-1.5-6B](https://huggingface.co/01-ai/Yi-1.5-6B) that underwent layer-freezing training on a 300M-token Cantonese dataset in order to train new word embeddings for the expanded vocabulary. This model has not undergone continued pre-training, so it is not recommended to use it for further pre-training.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.19.1
|
Josephgflowers/TinyLlama-Cinder-Math-Train | Josephgflowers | 2024-05-19T13:31:22Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Josephgflowers/TinyLlama-Cinder-Agent-Rag",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-16T05:37:02Z | ---
license: mit
base_model: Josephgflowers/TinyLlama-Cinder-Agent-Rag
tags:
- generated_from_trainer
model-index:
- name: TinyLlama-Cinder-Math-Train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyLlama-Cinder-Math-Train
This model is a fine-tuned version of [Josephgflowers/TinyLlama-Cinder-Agent-Rag](https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Agent-Rag) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
allknowingroger/Neuralcoven-7B-slerp | allknowingroger | 2024-05-17T12:18:05Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/Neurallaymons-7B-slerp",
"raidhon/coven_7b_128k_orpo_alpha",
"base_model:allknowingroger/Neurallaymons-7B-slerp",
"base_model:raidhon/coven_7b_128k_orpo_alpha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-17T12:13:25Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/Neurallaymons-7B-slerp
- raidhon/coven_7b_128k_orpo_alpha
base_model:
- allknowingroger/Neurallaymons-7B-slerp
- raidhon/coven_7b_128k_orpo_alpha
license: apache-2.0
---
# Neuralcoven-7B-slerp
Neuralcoven-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/Neurallaymons-7B-slerp](https://huggingface.co/allknowingroger/Neurallaymons-7B-slerp)
* [raidhon/coven_7b_128k_orpo_alpha](https://huggingface.co/raidhon/coven_7b_128k_orpo_alpha)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/Neurallaymons-7B-slerp
layer_range: [0, 32]
- model: raidhon/coven_7b_128k_orpo_alpha
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/Neurallaymons-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Neuralcoven-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
netcat420/MFANN3bv0.10 | netcat420 | 2024-05-21T18:22:18Z | 683 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"en",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-05-21T05:33:12Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-classification
---
MFANN 3b version 0.10

This model is fine-tuned on the MFANN dataset as of 5/21/24 and uses MFANN3bv0.9.10 as the base model.
SYSTEM PROMPT:
Instruct: {instruction}
Output: |
RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf | RichardErkhov | 2024-06-04T23:16:40Z | 683 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-04T23:08:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-finetuned-recipes-cooking_v2 - GGUF
- Model creator: https://huggingface.co/mrm8488/
- Original model: https://huggingface.co/mrm8488/gpt2-finetuned-recipes-cooking_v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-finetuned-recipes-cooking_v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q2_K.gguf) | Q2_K | 0.06GB |
| [gpt2-finetuned-recipes-cooking_v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q3_K.gguf) | Q3_K | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [gpt2-finetuned-recipes-cooking_v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q4_0.gguf) | Q4_0 | 0.08GB |
| [gpt2-finetuned-recipes-cooking_v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [gpt2-finetuned-recipes-cooking_v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [gpt2-finetuned-recipes-cooking_v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q4_K.gguf) | Q4_K | 0.08GB |
| [gpt2-finetuned-recipes-cooking_v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [gpt2-finetuned-recipes-cooking_v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q4_1.gguf) | Q4_1 | 0.08GB |
| [gpt2-finetuned-recipes-cooking_v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q5_0.gguf) | Q5_0 | 0.09GB |
| [gpt2-finetuned-recipes-cooking_v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [gpt2-finetuned-recipes-cooking_v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q5_K.gguf) | Q5_K | 0.09GB |
| [gpt2-finetuned-recipes-cooking_v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [gpt2-finetuned-recipes-cooking_v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q5_1.gguf) | Q5_1 | 0.09GB |
| [gpt2-finetuned-recipes-cooking_v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q6_K.gguf) | Q6_K | 0.1GB |
| [gpt2-finetuned-recipes-cooking_v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_gpt2-finetuned-recipes-cooking_v2-gguf/blob/main/gpt2-finetuned-recipes-cooking_v2.Q8_0.gguf) | Q8_0 | 0.12GB |
Original model description:
---
language: en
thumbnail:
widget:
- text: "HuggingFace Cake:"
---
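The original card ships only front matter and a widget prompt; a minimal, hedged sketch of generating a recipe with the upstream full-precision checkpoint (`mrm8488/gpt2-finetuned-recipes-cooking_v2`, the repo these quants were converted from) via the `transformers` pipeline could look like:

```python
from transformers import pipeline

# Assumes the upstream full-precision checkpoint; the GGUF files above target llama.cpp-style runtimes instead.
generator = pipeline("text-generation", model="mrm8488/gpt2-finetuned-recipes-cooking_v2")

# Use the widget prompt from the card as the seed text
result = generator("HuggingFace Cake:", max_new_tokens=120, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```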
|
gaianet/Qwen2-1.5B-Instruct-GGUF | gaianet | 2024-06-07T04:54:05Z | 683 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-07T04:44:55Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
license: apache-2.0
model_creator: Qwen
model_name: Qwen2-1.5B-Instruct
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-1.5B-Instruct-GGUF
## Original Model
[Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)
## Run with Gaianet
**Prompt template**
prompt template: `chatml`
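For reference, the `chatml` template wraps each turn in `<|im_start|>` / `<|im_end|>` markers; a typical rendering for this model (the system message below is the usual Qwen2 default and is an assumption, not taken from this card) looks like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```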
**Context size**
chat_ctx_size: `32000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
|
awnr/Mistral-7B-v0.1-signtensors-1-over-4 | awnr | 2024-06-27T10:36:45Z | 683 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-27T02:58:09Z | ---
license: apache-2.0
---
# Model Card for Model Mistral-7B-v0.1-5-over-16
I'm experimenting with the weight matrices in neural networks.
This is a clone of `Mistral-7B-v0.1` with some weight matrices replaced.
I'm interested in seeing how the adjustments affect performance on existing metrics.
## Model Details
Research in progress! Demons could come out of your nose if you use this.
### Model Description
A modification of [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Thanks to their team for sharing their model.
- **Modified by:** Dr. Alex W. Neal Riasanovsky
- **Model type:** pre-trained
- **Language(s) (NLP):** English
- **License:** Apache-2.0
## Bias, Risks, and Limitations
Use at your own risk.
I have no idea what this model's biases and limitations are.
I just want to see if the benchmark values are similar to those from `Mistral-7B-v0.1`.
I am setting up a long computational experiment to test some ideas.
|
lcw99/t5-base-korean-text-summary | lcw99 | 2023-04-13T02:30:33Z | 682 | 9 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-09-24T05:23:31Z | ---
language:
- ko
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-korean-text-summary
results: []
---
# t5-base-korean-text-summary
This model is a fine-tuned version of the [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) model, trained on the AIHUB "summary and report generation" dataset. It produces a short summary of long Korean text.
이 모델은 paust/pko-t5-base model을 AIHUB "요약문 및 레포트 생성 데이터"를 이용하여 fine tunning 한 것입니다. 이 모델은 한글로된 장문을 짧게 요약해 줍니다.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import nltk
nltk.download('punkt')
model_dir = "lcw99/t5-base-korean-text-summary"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
max_input_length = 512
text = """
주인공 강인구(하정우)는 ‘수리남에서 홍어가 많이 나는데 다 갖다버린다’는 친구
박응수(현봉식)의 얘기를 듣고 수리남산 홍어를 한국에 수출하기 위해 수리남으로 간다.
국립수산과학원 측은 “실제로 남대서양에 홍어가 많이 살고 아르헨티나를 비롯한 남미 국가에서 홍어가 많이 잡힌다”며
“수리남 연안에도 홍어가 많이 서식할 것”이라고 설명했다.
그러나 관세청에 따르면 한국에 수리남산 홍어가 수입된 적은 없다.
일각에선 “돈을 벌기 위해 수리남산 홍어를 구하러 간 설정은 개연성이 떨어진다”는 지적도 한다.
드라마 배경이 된 2008~2010년에는 이미 국내에 아르헨티나, 칠레, 미국 등 아메리카산 홍어가 수입되고 있었기 때문이다.
실제 조봉행 체포 작전에 협조했던 ‘협력자 K씨’도 홍어 사업이 아니라 수리남에 선박용 특수용접봉을 파는 사업을 하러 수리남에 갔었다.
"""
inputs = ["summarize: " + text]
inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=100)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
predicted_title = nltk.sent_tokenize(decoded_output.strip())[0]
print(predicted_title)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float16
### Training results
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
timm/convnext_large.fb_in22k | timm | 2024-02-10T23:27:04Z | 682 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-13T07:09:19Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-22k
---
# Model card for convnext_large.fb_in22k
A ConvNeXt image classification model. Pretrained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 229.8
- GMACs: 34.4
- Activations (M): 43.1
- Image size: 224 x 224
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_large.fb_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large.fb_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 56, 56])
# torch.Size([1, 384, 28, 28])
# torch.Size([1, 768, 14, 14])
# torch.Size([1, 1536, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large.fb_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
UFNLP/gatortron-medium | UFNLP | 2024-03-19T00:24:44Z | 682 | 19 | transformers | [
"transformers",
"pytorch",
"megatron-bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-02T23:50:42Z | ---
license: apache-2.0
---
<h2>GatorTron-Medium overview </h2>
Developed by a joint effort between the University of Florida and NVIDIA, GatorTron-Medium is a clinical language model of 3.9 billion parameters, pre-trained using a BERT architecture implemented in the Megatron package (https://github.com/NVIDIA/Megatron-LM).
GatorTron-Medium is pre-trained using a dataset consisting of:
- 82B words of de-identified clinical notes from the University of Florida Health System,
- 6.1B words from PubMed CC0,
- 2.5B words from WikiText,
- 0.5B words of de-identified clinical notes from MIMIC-III
The Github for GatorTron is at : https://github.com/uf-hobi-informatics-lab/GatorTron
<h2>Model variations</h2>
Model | Parameter
--- | ---
[gatortron-base](https://huggingface.co/UFNLP/gatortron-base)| 345 million
[gatortronS](https://huggingface.co/UFNLP/gatortronS) | 345 million
[gatortron-medium (this model)](https://huggingface.co/UFNLP/gatortron-medium) | 3.9 billion
[gatortron-large](https://huggingface.co/UFNLP/gatortron-large) | 8.9 billion
<h2>How to use</h2>
```python
from transformers import AutoModel, AutoTokenizer, AutoConfig
tokenizer = AutoTokenizer.from_pretrained('UFNLP/gatortron-medium')
config = AutoConfig.from_pretrained('UFNLP/gatortron-medium')
mymodel = AutoModel.from_pretrained('UFNLP/gatortron-medium')
encoded_input = tokenizer("Bone scan: Negative for distant metastasis.", return_tensors="pt")
encoded_output = mymodel(**encoded_input)
```
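The forward pass above returns the standard Hugging Face encoder output object; a minimal sketch of pulling contextual token embeddings out of it (assuming the usual `last_hidden_state` field) is:

```python
# encoded_output.last_hidden_state has shape (batch_size, sequence_length, hidden_size)
token_embeddings = encoded_output.last_hidden_state

# Mean-pool over the token dimension to obtain one pooled vector per input sequence
sentence_embedding = token_embeddings.mean(dim=1)
print(sentence_embedding.shape)
```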
- An NLP package using GatorTron for clinical concept extraction (Named Entity Recognition): https://github.com/uf-hobi-informatics-lab/ClinicalTransformerNER
- An NLP package using GatorTron for Relation Extraction: https://github.com/uf-hobi-informatics-lab/ClinicalTransformerRelationExtraction
- An NLP package using GatorTron for extraction of social determinants of health (SDoH) from clinical narratives: https://github.com/uf-hobi-informatics-lab/SDoH_SODA
<h2>De-identification</h2>
We applied a de-identification system to remove protected health information (PHI) from clinical text. We adopted the safe-harbor method to identify 18 PHI categories defined in the Health Insurance Portability and Accountability Act (HIPAA) and replaced them with dummy strings (e.g., replacing people's names with [\*\*NAME\*\*]).
The de-identification system is described in:
Yang X, Lyu T, Li Q, Lee C-Y, Bian J, Hogan WR, Wu Y†. A study of deep learning methods for de-identification of clinical notes in cross-institute settings. BMC Med Inform Decis Mak. 2020 Dec 5;19(5):232. https://www.ncbi.nlm.nih.gov/pubmed/31801524.
<h2>Citation info</h2>
Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, Compas C, Martin C, Costa AB, Flores MG, Zhang Y, Magoc T, Harle CA, Lipori G, Mitchell DA, Hogan WR, Shenkman EA, Bian J, Wu Y†. A large language model for electronic health records. Npj Digit Med. Nature Publishing Group; 2022 Dec 26;5(1):1–9. https://www.nature.com/articles/s41746-022-00742-2
- BibTeX entry
```
@article{yang2022large,
title={A large language model for electronic health records},
author={Yang, Xi and Chen, Aokun and PourNejatian, Nima and Shin, Hoo Chang and Smith, Kaleb E and Parisien, Christopher and Compas, Colin and Martin, Cheryl and Costa, Anthony B and Flores, Mona G and Zhang, Ying and Magoc, Tanja and Harle, Christopher A and Lipori, Gloria and Mitchell, Duane A and Hogan, William R and Shenkman, Elizabeth A and Bian, Jiang and Wu, Yonghui },
journal={npj Digital Medicine},
volume={5},
number={1},
pages={194},
year={2022},
publisher={Nature Publishing Group UK London}
}
```
<h2>Contact</h2>
- Yonghui Wu: [email protected]
- Cheng Peng: [email protected] |
TheBloke/Llama-2-13B-Chat-Dutch-GGUF | TheBloke | 2023-09-27T12:48:52Z | 682 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"generated_from_trainer",
"lora",
"adapters",
"nl",
"dataset:BramVanroy/dutch_chat_datasets",
"base_model:BramVanroy/Llama-2-13b-chat-dutch",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-09-12T12:02:35Z | ---
language:
- nl
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- llama
- lora
- adapters
datasets:
- BramVanroy/dutch_chat_datasets
base_model: BramVanroy/Llama-2-13b-chat-dutch
inference: false
model_creator: Bram Vanroy
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
model-index:
- name: Llama-2-13b-chat-dutch
results: []
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B Chat Dutch - GGUF
- Model creator: [Bram Vanroy](https://huggingface.co/BramVanroy)
- Original model: [Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Bram Vanroy's Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF)
* [Bram Vanroy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-sa-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Bram Vanroy's Llama 2 13B Chat Dutch](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-13b-chat-dutch.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-13b-chat-dutch.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama-2-13b-chat-dutch.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama-2-13b-chat-dutch.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama-2-13b-chat-dutch.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-13b-chat-dutch.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llama-2-13b-chat-dutch.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama-2-13b-chat-dutch.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-13b-chat-dutch.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama-2-13b-chat-dutch.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama-2-13b-chat-dutch.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama-2-13b-chat-dutch.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Chat-Dutch-GGUF/blob/main/llama-2-13b-chat-dutch.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-13B-Chat-Dutch-GGUF and below it, a specific filename to download, such as: llama-2-13b-chat-dutch.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-13B-Chat-Dutch-GGUF llama-2-13b-chat-dutch.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-13B-Chat-Dutch-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-13B-Chat-Dutch-GGUF llama-2-13b-chat-dutch.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-13b-chat-dutch.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-13B-Chat-Dutch-GGUF", model_file="llama-2-13b-chat-dutch.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
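The card also lists llama-cpp-python as an option; a minimal, hedged sketch of loading one of these GGUF files with that library (assuming the Q4_K_M file has already been downloaded to the current directory) is:

```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file; set n_gpu_layers=0 if no GPU acceleration is available.
llm = Llama(
    model_path="./llama-2-13b-chat-dutch.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

# Dutch prompt: "Write a short poem about the sea." (abbreviated system prompt used for illustration)
output = llm(
    "[INST] <<SYS>>\nJe bent een behulpzame assistent.\n<</SYS>>\n\nSchrijf een kort gedicht over de zee.[/INST]",
    max_tokens=256,
    stop=["</s>"],
)
print(output["choices"][0]["text"])
```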
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Bram Vanroy's Llama 2 13B Chat Dutch
# Llama-2-13b-chat-dutch
This model is a fine-tuned version of [BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny)
on the [BramVanroy/dutch_chat_datasets](https://huggingface.co/datasets/BramVanroy/dutch_chat_datasets) dataset on a context of 4096 tokens.
See the original [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) for more information, intended use, and biases.
If you use this model or refer to it, please use the following citation:
Bram Vanroy. (2023). Llama v2 13b: Finetuned on Dutch Conversational Data. Hugging Face. https://doi.org/10.57967/HF/1018
```bibtex
@misc{https://doi.org/10.57967/hf/1018,
doi = {10.57967/HF/1018},
url = {https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch},
author = {{Bram Vanroy}},
title = {{Llama} v2 13b: {Finetuned} on {Dutch} Conversational Data},
publisher = {{Hugging} {Face}},
year = {2023}
}
```
## Model description
I could not get the original Llama 2 13B to produce much Dutch, even though the description paper indicates that it was trained on a (small) portion of Dutch data. I therefore
continued training the original Llama 2 13B checkpoint on Dutch data [in regular CLM](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny). In a second
step I finetuned that model on a collection of synthetic (translated) instruction and chat datasets that I have [collected](https://huggingface.co/datasets/BramVanroy/dutch_chat_datasets).
See their pages for licensing, usage, creation, and citation information.
- https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch
- https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch-baize
- https://huggingface.co/datasets/BramVanroy/stackoverflow-chat-dutch
- https://huggingface.co/datasets/BramVanroy/quora-chat-dutch
This model is the result of that process. While not perfect by any means, it can perform reasonably well in Dutch depending on the prompts. It is also decent at helping with programming tasks.
## Intended uses & limitations
Depending on the prompt, the model can return good results considering that it is only 13B in size and was only marginally pretrained on Dutch. That being said, the
model was not trained on human feedback and contains no safe-guards so it may produce unexpected and even offensive content depending on the query. The only attempt
of a safe-guard is the default prompt that it was trained on, which was
> Je bent een behulpzame, respectvolle en eerlijke assistent. Antwoord altijd zo behulpzaam mogelijk. Je antwoorden mogen geen schadelijke, onethische, racistische, seksistische, gevaarlijke of illegale inhoud bevatten. Zorg ervoor dat je antwoorden sociaal onbevooroordeeld en positief van aard zijn.\n\nAls een vraag nergens op slaat of feitelijk niet coherent is, leg dan uit waarom in plaats van iets niet correct te antwoorden. Als je het antwoord op een vraag niet weet, deel dan geen onjuiste informatie.\
Use with caution and at your own risk!
Because the model was trained on synthetic data, translated with OpenAI's API, you cannot use this model to create a competitive product to theirs.
## Training procedure
Trained with a 4096-token context length. The dataset was preprocessed so that as many dialogs as possible were packed into a single batch without disrupting any dialog. In other words, a dialog was never split across different sequences or batches. During training, the human prompts were ignored in backpropagation.
Trained with LoRA targeting ["q_proj", "v_proj"] in 4-bit and merged before upload. Trained with Flash Attention as borrowed from [here](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/utils/llama_patch.py).
The adapters are in the `adapters` branch.
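The card does not list the adapter rank or alpha; a hedged sketch of a comparable 4-bit LoRA setup with `peft` and `bitsandbytes` (the `r`, `lora_alpha`, and dropout values below are illustrative assumptions, not the values used for this model) might look like:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base checkpoint named on the card in 4-bit
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections stated on the card
lora_config = LoraConfig(
    r=64,                                 # illustrative assumption
    lora_alpha=16,                        # illustrative assumption
    lora_dropout=0.05,                    # illustrative assumption
    target_modules=["q_proj", "v_proj"],  # as stated on the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```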
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0193 | 0.09 | 20 | 1.1583 |
| 0.9743 | 0.17 | 40 | 1.1339 |
| 0.9159 | 0.26 | 60 | 1.1218 |
| 0.9131 | 0.35 | 80 | 1.1153 |
| 0.8816 | 0.44 | 100 | 1.1130 |
| 0.8977 | 0.52 | 120 | 1.1069 |
| 0.9061 | 0.61 | 140 | 1.1025 |
| 0.8672 | 0.7 | 160 | 1.1024 |
| 0.8956 | 0.79 | 180 | 1.0971 |
| 0.8514 | 0.87 | 200 | 1.0995 |
| 0.8357 | 0.96 | 220 | 1.0952 |
| 0.8294 | 1.05 | 240 | 1.0964 |
| 0.8531 | 1.13 | 260 | 1.0947 |
| 0.8321 | 1.22 | 280 | 1.0951 |
| 0.8365 | 1.31 | 300 | 1.0910 |
| 0.8616 | 1.4 | 320 | 1.0894 |
| 0.8397 | 1.48 | 340 | 1.0904 |
| 0.861 | 1.57 | 360 | 1.0880 |
| 0.8116 | 1.66 | 380 | 1.0871 |
| 0.8285 | 1.74 | 400 | 1.0855 |
| 0.8603 | 1.83 | 420 | 1.0856 |
| 0.8126 | 1.92 | 440 | 1.0848 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
<!-- original-model-card end -->
|
TheBloke/Uncensored-Frank-7B-GGUF | TheBloke | 2023-09-27T12:52:38Z | 682 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"base_model:ajibawa-2023/Uncensored-Frank-7B",
"license:cc-by-nc-nd-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-09-19T23:09:48Z | ---
language:
- en
license: cc-by-nc-nd-4.0
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
model_name: Uncensored Frank 7B
base_model: ajibawa-2023/Uncensored-Frank-7B
inference: false
model_creator: Feynman Innovcations
model_type: llama
prompt_template: 'This is a conversation with your Uncensored AI assistant Frank.
Frank can discuss with you at length about various topics such as ---, ---, ---,
. You can ask it anything you want and it will do its best to give you accurate
and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Uncensored Frank 7B - GGUF
- Model creator: [Feynman Innovcations](https://huggingface.co/ajibawa-2023)
- Original model: [Uncensored Frank 7B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Feynman Innovcations's Uncensored Frank 7B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Uncensored-Frank-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF)
* [Feynman Innovcations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Frank
```
This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovcations's Uncensored Frank 7B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [uncensored-frank-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [uncensored-frank-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [uncensored-frank-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [uncensored-frank-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [uncensored-frank-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [uncensored-frank-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [uncensored-frank-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [uncensored-frank-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [uncensored-frank-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [uncensored-frank-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [uncensored-frank-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [uncensored-frank-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-7B-GGUF/blob/main/uncensored-frank-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Uncensored-Frank-7B-GGUF and below it, a specific filename to download, such as: uncensored-frank-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Uncensored-Frank-7B-GGUF uncensored-frank-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Uncensored-Frank-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Uncensored-Frank-7B-GGUF uncensored-frank-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m uncensored-frank-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a Uncensored AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Uncensored-Frank-7B-GGUF", model_file="uncensored-frank-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
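For example, a minimal LangChain + ctransformers sketch for this repo might look like the following (the generation settings are illustrative, and the import path can differ across LangChain versions):

```python
from langchain_community.llms import CTransformers

# Illustrative settings only; set gpu_layers to 0 if you have no GPU acceleration.
llm = CTransformers(
    model="TheBloke/Uncensored-Frank-7B-GGUF",
    model_file="uncensored-frank-7b.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7, "gpu_layers": 50},
)

print(llm.invoke("AI is going to"))
```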
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Feynman Innovations' Uncensored Frank 7B
**Frank: An Uncensored Model**
The character of Frank Costello in "The Departed" is known for his cunning, boldness, and willingness to talk about anything, regardless of societal norms or restrictions.
Frank, An Uncensored model, draws inspiration from these qualities to offer a platform where users can discuss a wide array of topics without the fear of censorship or restrictions.
Frank aims to push boundaries and encourage candid conversations. With Frank you can have unfiltered discussions on a multitude of topics, from politics and controversial issues to personal experiences and sensitive subjects.
It is trained on around 150,000 sets of conversations, each set having 10~15 conversations. Base data was obtained from [Eric Hartford](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).
This data was further refined and fine-tuned. In addition, more than 80k further synthetic conversations were generated and refined. We will not release this data.
**Warning**
An uncensored model has little or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object.
Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.
**Training:**
The entire dataset was trained on 4 x A100 80GB GPUs on Azure. Training for 3 epochs took 22 hours. The DeepSpeed codebase was used for training. This model was trained on Llama-1 by Meta.
**Example Prompt:**
```
This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: <prompt>
ASSISTANT:
```
<!-- original-model-card end -->
|
stabilityai/japanese-stablelm-3b-4e1t-instruct | stabilityai | 2024-04-26T03:20:42Z | 682 | 29 | transformers | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"japanese-stablelm",
"causal-lm",
"custom_code",
"ja",
"arxiv:2307.09288",
"arxiv:2104.09864",
"arxiv:2204.06745",
"arxiv:1607.06450",
"arxiv:1910.07467",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-10-16T07:50:31Z | ---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
license: apache-2.0
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese StableLM-3B-4E1T Instruct
## Model Description
This is a 3B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese StableLM-3B-4E1T Base](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base).
*If you are in search of a larger model, please check [Japanese Stable LM Instruct Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b)*.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-3b-4e1t-instruct")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/japanese-stablelm-3b-4e1t-instruct",
trust_remote_code=True,
torch_dtype="auto",
)
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
def build_prompt(user_query, inputs="", sep="\n\n### "):
sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
p = sys_msg
roles = ["指示", "応答"]
msgs = [": \n" + user_query, ": \n"]
if inputs:
roles.insert(1, "入力")
msgs.insert(1, ": \n" + inputs)
for role, msg in zip(roles, msgs):
p += sep + role + msg
return p
# Infer with a prompt plus additional context input
user_inputs = {
"user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
"inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=256,
temperature=1,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese StableLM-3B-4E1T Instruct` model is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
### Model Architecture
The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,795,443,200 | 2560 | 32 | 32 | 4096 |
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Tokenizer**: GPT-NeoX ([Black et al., 2022](https://arxiv.org/abs/2204.06745)).
### Training Datasets
- [Japanese translation of the Databricks Dolly-15k dataset](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Japanese translation of the subset of the Anthropic HH dataset](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [Wikinews](https://ja.wikinews.org/) [subset](https://huggingface.co/datasets/fujiki/llm-japanese-dataset_wikinews) of the [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model-generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Credits
The fine-tuning was carried out by [Fujiki Nakamura](https://huggingface.co/fujiki).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), [Naoki Orii](https://huggingface.co/mrorii), and [Takuya Akiba](https://huggingface.co/iwiwi).
## Acknowledgements
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
|
uonlp/Vistral-7B-Chat-gguf | uonlp | 2024-02-02T00:44:54Z | 682 | 11 | null | [
"gguf",
"vistral",
"mistral",
"pytorch",
"uonlp",
"Viet-Mistral",
"text-generation",
"vi",
"license:afl-3.0",
"region:us"
] | text-generation | 2024-01-23T20:35:13Z | ---
license: afl-3.0
language:
- vi
pipeline_tag: text-generation
model_name: Vistral-7B-Chat
tags:
- vistral
- mistral
- pytorch
- uonlp
- Viet-Mistral
prompt_template: '<s>[INST] <<SYS>>
Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào. Hãy đảm bảo rằng các câu trả lời của bạn không có thiên kiến xã hội và mang tính tích cực.Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác. Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
<</SYS>>
{prompt} [/INST]
'
quantized_by: chiennv
---
The challenge with large language models is that they cannot be executed locally on your laptop.
Thanks to the [llama.cpp](https://github.com/ggerganov/llama.cpp) project, it is now feasible to run our [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat) on a single computer (Windows or MacBook) even without a dedicated GPU.
# Vistral-7B-Chat - GGUF
- Model creator: [Viet Mistral](https://huggingface.co/Viet-Mistral/)
- Original model: [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML. GGUF offers numerous advantages over GGML, such as better tokenization, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here are several clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
<!-- prompt-template start -->
## Prompt template: Vistral-7B-Chat
```
<s>[INST] <<SYS>>
Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào. Hãy đảm bảo rằng các câu trả lời của bạn không có thiên kiến xã hội và mang tính tích cực.Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác. Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
<</SYS>>
{prompt} [/INST]
```
You can also use the chat template file in [this repository](https://huggingface.co/chiennv/Vistral-7B-Chat-gguf/blob/main/template_chat.json).
<!-- prompt-template end -->
### LM Studio
To deploy Vistral locally on LM Studio, ensure you are utilizing the [specified chat template, download here](https://huggingface.co/uonlp/Vistral-7B-Chat-gguf/blob/main/template_chat.json). Before initiating the process, make sure to upload the chat template, as illustrated in the image below:
<p align="center"> <img src="usage.png" width="650" /> </p>
This step is crucial for the proper functioning of Vistral on your local machine.
### Use with langchain
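A minimal sketch with LangChain and llama-cpp-python is shown below. The local model path and generation settings are assumptions to adjust for your setup, and the import path can differ across LangChain versions:

```python
from langchain_community.llms import LlamaCpp

# The Vietnamese system prompt from the template above (shortened here).
system_prompt = "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. ..."

# Assumed local path to a downloaded Vistral GGUF file from this repo.
llm = LlamaCpp(
    model_path="./vistral-7b-chat.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=0,   # raise this if you have GPU acceleration
    temperature=0.7,
    max_tokens=512,
)

prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\nViệt Nam có bao nhiêu tỉnh thành? [/INST]"
print(llm.invoke(prompt))
```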
## Citation
```
@article{chien2023vistral,
author = {Chien Van Nguyen, Thuat Nguyen, Quan Nguyen, Huy Huu Nguyen, Björn Plüster, Nam Pham, Huu Nguyen, Patrick Schramowski, Thien Huu Nguyen},
title = {Vistral-7B-Chat - Towards a State-of-the-Art Large Language Model for Vietnamese},
year = 2023,
}
``` |
bczhou/TinyLLaVA-3.1B-SigLIP | bczhou | 2024-02-26T13:31:34Z | 682 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"siglip_vision_model",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-23T05:58:45Z | ---
license: mit
---
|
mradermacher/DaringMaid-20B-V1.1-i1-GGUF | mradermacher | 2024-05-06T06:20:32Z | 682 | 2 | transformers | [
"transformers",
"gguf",
"Merge",
"en",
"base_model:Kooten/DaringMaid-20B-V1.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-03T17:03:26Z | ---
base_model: Kooten/DaringMaid-20B-V1.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- Merge
---
## About
weighted/imatrix quants of https://huggingface.co/Kooten/DaringMaid-20B-V1.1
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 11.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/DaringMaid-20B-V1.1-i1-GGUF/resolve/main/DaringMaid-20B-V1.1.i1-Q6_K.gguf) | i1-Q6_K | 16.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Commencis/Commencis-LLM | Commencis | 2024-03-19T14:12:59Z | 682 | 11 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"tr",
"en",
"dataset:uonlp/CulturaX",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T17:03:55Z | ---
license: apache-2.0
datasets:
- uonlp/CulturaX
language:
- tr
- en
pipeline_tag: text-generation
metrics:
- accuracy
- bleu
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Commencis-LLM
<!-- Provide a quick summary of what the model is/does. -->
Commencis LLM is a generative model based on the Mistral 7B model. The base model adapts Mistral 7B to Turkish Banking specifically by training on a diverse dataset obtained through various methods, encompassing general Turkish and banking data.
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Commencis](https://www.commencis.com)
- **Language(s):** Turkish
- **Finetuned from model:** [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
- **Blog Post**: [LLM Blog](https://www.commencis.com/thoughts/commencis-introduces-its-purpose-built-turkish-fluent-llm-for-banking-and-finance-industry-a-detailed-overview/)
## Training Details
The alignment phase consists of two stages: supervised fine-tuning (SFT) and reward modeling with reinforcement learning from human feedback (RLHF).
The SFT phase was done on a mixture of synthetic datasets generated from comprehensive banking dictionary data, synthetic datasets generated from banking-based domain and sub-domain headings, and data derived from the CulturaX Turkish dataset by filtering. It was trained for three epochs. We used a learning rate of 2e-5, a LoRA rank of 64, and a maximum sequence length of 1024 tokens.
### Usage
### Suggested Inference Parameters
- Temperature: 0.5
- Repetition penalty: 1.0
- Top-p: 0.9
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
class TextGenerationAssistant:
def __init__(self, model_id:str):
self.tokenizer = AutoTokenizer.from_pretrained(model_id)
self.model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto',load_in_8bit=True,load_in_4bit=False)
self.pipe = pipeline("text-generation",
model=self.model,
tokenizer=self.tokenizer,
device_map="auto",
max_new_tokens=1024,
return_full_text=True,
repetition_penalty=1.0
)
self.sampling_params = dict(do_sample=True, temperature=0.5, top_k=50, top_p=0.9)
self.system_prompt = "Sen yardımcı bir asistansın. Sana verilen talimat ve girdilere en uygun cevapları üreteceksin. \n\n\n"
def format_prompt(self, user_input):
return "[INST] " + self.system_prompt + user_input + " [/INST]"
def generate_response(self, user_query):
prompt = self.format_prompt(user_query)
outputs = self.pipe(prompt, **self.sampling_params)
return outputs[0]["generated_text"].split("[/INST]")[1].strip()
assistant = TextGenerationAssistant(model_id="Commencis/Commencis-LLM")
# Enter your query here.
user_query = "Faiz oranı yükseldiğinde kredi maliyetim nasıl etkilenir?"
response = assistant.generate_response(user_query)
print(response)
```
### Chat Template
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "Commencis/Commencis-LLM"
messages = [{"role": "user", "content": "Faiz oranı yükseldiğinde kredi maliyetim nasıl etkilenir?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.5, top_k=50, top_p=0.9)
print (outputs[0]["generated_text"].split("[/INST]")[1].strip())
```
# Quantized Models:
GGUF: https://huggingface.co/Commencis/Commencis-LLM-GGUF
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, Commencis-LLM has certain limitations:
- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content. |
allknowingroger/RasGullaINEX12-7B-slerp | allknowingroger | 2024-04-10T18:37:03Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"AbacusResearch/RasGulla1-7b",
"MSL7/INEX12-7b",
"base_model:AbacusResearch/RasGulla1-7b",
"base_model:MSL7/INEX12-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T10:11:25Z | ---
tags:
- merge
- mergekit
- lazymergekit
- AbacusResearch/RasGulla1-7b
- MSL7/INEX12-7b
base_model:
- AbacusResearch/RasGulla1-7b
- MSL7/INEX12-7b
license: apache-2.0
---
# RasGullaINEX12-7B-slerp
RasGullaINEX12-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [AbacusResearch/RasGulla1-7b](https://huggingface.co/AbacusResearch/RasGulla1-7b)
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AbacusResearch/RasGulla1-7b
layer_range: [0, 32]
- model: MSL7/INEX12-7b
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX12-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/RasGullaINEX12-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
allknowingroger/Mistralmath-15B-pass | allknowingroger | 2024-04-10T18:22:52Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"AmeerH/Mistral-Math-2x7b-mix",
"conversational",
"base_model:AmeerH/Mistral-Math-2x7b-mix",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T08:39:39Z | ---
tags:
- merge
- mergekit
- lazymergekit
- AmeerH/Mistral-Math-2x7b-mix
base_model:
- AmeerH/Mistral-Math-2x7b-mix
- AmeerH/Mistral-Math-2x7b-mix
- AmeerH/Mistral-Math-2x7b-mix
- AmeerH/Mistral-Math-2x7b-mix
- AmeerH/Mistral-Math-2x7b-mix
license: apache-2.0
---
# Mistralmath-15B-pass
Mistralmath-15B-pass is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [AmeerH/Mistral-Math-2x7b-mix](https://huggingface.co/AmeerH/Mistral-Math-2x7b-mix)
* [AmeerH/Mistral-Math-2x7b-mix](https://huggingface.co/AmeerH/Mistral-Math-2x7b-mix)
* [AmeerH/Mistral-Math-2x7b-mix](https://huggingface.co/AmeerH/Mistral-Math-2x7b-mix)
* [AmeerH/Mistral-Math-2x7b-mix](https://huggingface.co/AmeerH/Mistral-Math-2x7b-mix)
* [AmeerH/Mistral-Math-2x7b-mix](https://huggingface.co/AmeerH/Mistral-Math-2x7b-mix)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: AmeerH/Mistral-Math-2x7b-mix
layer_range: [0,9]
- sources:
- model: AmeerH/Mistral-Math-2x7b-mix
layer_range: [5,14]
- sources:
- model: AmeerH/Mistral-Math-2x7b-mix
layer_range: [10,19]
- sources:
- model: AmeerH/Mistral-Math-2x7b-mix
layer_range: [15,24]
- sources:
- model: AmeerH/Mistral-Math-2x7b-mix
layer_range: [20,32]
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Mistralmath-15B-pass"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Noodlz/DolphinStar-12.5B | Noodlz | 2024-04-14T00:00:52Z | 682 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:2203.05482",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T07:27:58Z | ---
license: apache-2.0
---

Custom Model "Dolphin2Star1" Merged by Noodlz.
A 12.5B linear merge using the uncensored Mistral 7B v0.2 as the base, combined with the fine-tuned StarlingLM 7B Beta (which is originally based on Mistral 7B v0.1).
have fun =)
[EDIT] - Preset-wise, it seems to prefer the "ChatML" format.
[EDIT 2] - Usage Notes - the model is somewhat picky about batch size and prompt preset/template (maybe because it merges ChatML and OpenChat models).
My current recommended settings & findings:
- Using LM Studio - use the default preset, GPU acceleration to max, prompt eval size 1024, context length 32768. This yields decent, coherent results. ChatML works too but occasionally spits out odd text after a couple of turns.
- Using Oobabooga (Windows PC) - runs well using run-in-4bit along with use_flash_attention_2. Default presets and everything work just fine.
- Using OobaBooga (Mac) - [investigating]
## Instructions Template:
```
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ '<s>' }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
```
## Chat Template:
```
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{%- if message['content'] -%}
{{- message['content'] + '\n\n' -}}
{%- endif -%}
{%- if user_bio -%}
{{- user_bio + '\n\n' -}}
{%- endif -%}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{- name1 + ': ' + message['content'] + '\n'-}}
{%- else -%}
{{- name2 + ': ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
```
---
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
- mergekit
- merge
---
# output_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NexusFlow/Starling-LM-7B-beta](https://huggingface.co/NexusFlow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
parameters:
weight: 1.0
slices:
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0,1]
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [0,1]
parameters:
weight: 0
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [1,8]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [4,12]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [8,16]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [12,20]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [16,24]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [20,28]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [24,31]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [31,32]
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [31,32]
parameters:
weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
``` |
bunnycore/SmartToxic-7B | bunnycore | 2024-04-17T15:48:32Z | 682 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T10:02:13Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# SmartToxic-7B
SmartToxic-7B is a creative and smart language model designed to provide users with engaging and satisfying responses. This model is a merger of several high-performing models, resulting in a unique blend of capabilities. While the model is not uncensored, it aims to maintain a balance between creativity and appropriateness.
# Performance Benchmarks:
SmartToxic-7B has demonstrated strong performance on various benchmark tests, showcasing its ability to generate creative and engaging content. However, users are encouraged to test the model themselves to determine if it meets their specific needs and requirements.
# Limitations:
While SmartToxic-7B is a powerful language model, it may still struggle with certain types of queries or generate responses that are not entirely accurate or appropriate. Users should be aware of these potential limitations and use the model's outputs with discretion.
SmartToxic-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
## 🧩 Configuration
```yaml
models:
- model: ResplendentAI/Datura_7B
- model: BarryFutureman/WestLakeX-7B-EvoMerge-Variant2
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
merge_method: model_stock
base_model: FuseAI/FuseChat-7B-VaRM
dtype: bfloat16
``` |
bababababooey/mergekit-slerp-mntqhzv | bababababooey | 2024-04-17T02:21:59Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lucyknada/microsoft_WizardLM-2-7B",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T01:58:18Z | ---
base_model:
- lucyknada/microsoft_WizardLM-2-7B
- NousResearch/Hermes-2-Pro-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: lucyknada/microsoft_WizardLM-2-7B
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
tokenizer_source: base
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Hermes for input & output, WizardLM-2 in the middle layers
embed_slerp: true
``` |
allknowingroger/RogerWizard-12B-MoE | allknowingroger | 2024-04-17T07:28:11Z | 682 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"lucyknada/microsoft_WizardLM-2-7B",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:lucyknada/microsoft_WizardLM-2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T07:20:56Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- lucyknada/microsoft_WizardLM-2-7B
base_model:
- allknowingroger/MultiverseEx26-7B-slerp
- lucyknada/microsoft_WizardLM-2-7B
---
# RogerWizard-12B-MoE
RogerWizard-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiverseEx26-7B-slerp
experts:
- source_model: allknowingroger/MultiverseEx26-7B-slerp
positive_prompts: ["what"]
- source_model: lucyknada/microsoft_WizardLM-2-7B
positive_prompts: ["why"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/RogerWizard-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
flammenai/flammen22-mistral-7B | flammenai | 2024-04-27T20:18:54Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:Doctor-Shotgun/theory-of-mind-dpo",
"base_model:flammenai/flammen21X-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-27T20:11:10Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen21X-mistral-7B
datasets:
- Doctor-Shotgun/theory-of-mind-dpo
---

# flammen22-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [Doctor-Shotgun/theory-of-mind-dpo](https://huggingface.co/datasets/Doctor-Shotgun/theory-of-mind-dpo).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=2048,
max_length=4096,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
abhishek/autotrain-mixtral-8x7b-orpo-v1 | abhishek | 2024-05-01T16:35:45Z | 682 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-01T13:55:19Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
DrNicefellow/GPT-2-Large-43k-steps | DrNicefellow | 2024-05-01T22:34:49Z | 682 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-01T14:02:44Z | ---
license: apache-2.0
---
Self-trained GPT-2 Large, with around 770M parameters.
The tokenizer is the one from https://huggingface.co/openai-community/gpt2.
It is being trained on around 400B tokens; this checkpoint is at step 43k.
The evaluation is being conducted now.
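A minimal usage sketch (assuming the checkpoint loads with the standard `transformers` classes, with the tokenizer taken from `openai-community/gpt2` as noted above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("DrNicefellow/GPT-2-Large-43k-steps")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```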
## License
This model is available under the Apache 2.0 License. Well, also MIT License. So both should be followed.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink.
|
shyamieee/J4RVIZ-v6.0 | shyamieee | 2024-05-06T18:03:16Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-06T17:23:48Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# j4rviz_v6_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using bophades-mistral-truthy-DPO-7B as a base.
### Models Merged
The following models were included in the merge:
* Calme-7B-Instruct-v0.9
* multi_verse_model
### Configuration
|
Ppoyaa/Lumina-5.5-Instruct | Ppoyaa | 2024-05-09T14:50:23Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"base_model:Ppoyaa/Lumina-5-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-08T08:28:13Z | ---
base_model:
- Ppoyaa/Lumina-5-Instruct
library_name: transformers
license: apache-2.0
---
# Lumina-5.5-Instruct
Lumina-5.5-Instruct is a Mixture of Experts (MoE) made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing). This model uses a context window of up to 32k.
This 5.5 version has 32B parameters, as opposed to the 19B parameters of version 5.
## 🏆 Open LLM Leaderboard Evaluation Results
Coming soon.
## Quants
By [mradermacher](https://huggingface.co/mradermacher):
* Static GGUF: [mradermacher/Lumina-5.5-Instruct-GGUF](https://huggingface.co/mradermacher/Lumina-5.5-Instruct-GGUF)
* Imatrix GGUF: [mradermacher/Lumina-5.5-Instruct-i1-GGUF](https://huggingface.co/mradermacher/Lumina-5.5-Instruct-i1-GGUF)
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/Lumina-5.5-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Edgerunners/yi-9b-may-ortho-baukit-13fail-3000total-bf16 | Edgerunners | 2024-05-12T20:51:14Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T18:40:13Z | ---
license: cc-by-nc-4.0
---
New Yi 9B released in May.
Test results: refusal removal worked, but Yi 9B chat is still kind of bad; ortho won't fix that, so judge for yourself.
This version had only 13 refusals out of 3000 ortho-tests, in line with the others in terms of refusals.
---
wassname (updated baukit) implementation of the paper: https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction
applied here to the May release of Yi-9B chat
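For context, the approach finds a single "refusal direction" in activation space and removes its component from weights that write into the residual stream. A heavily simplified sketch of that projection step follows; it is an illustration of the idea, not the actual baukit code used for this model:

```python
import torch

def orthogonalize(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # W: weight matrix writing to the residual stream (rows = residual dim).
    # refusal_dir: typically the difference of mean activations between
    # harmful and harmless prompts at a chosen layer (see the linked post).
    d = refusal_dir / refusal_dir.norm()
    return W - torch.outer(d, d) @ W  # remove the component along d
```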
1. The Model is meant purely for alignment research and exploration of alignmentforum theory
2. The Model is provided "AS IS" and "AS AVAILABLE" without warranty of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title, or non-infringement.
3. The Provider disclaims all liability for any damages or losses resulting from the use or misuse of the Model, including but not limited to any damages or losses arising from the use of the Model for purposes other than those intended by the Provider.
4. The Provider does not endorse or condone the use of the Model for any purpose that violates applicable laws, regulations, or ethical standards.
5. The Provider does not warrant that the Model will meet your specific requirements or that it will be error-free or that it will function without interruption.
6. You assume all risks associated with the use of the Model, including but not limited to any loss of data, loss of business, or damage to your reputation. |
bartowski/Yi-1.5-9B-Chat-GGUF | bartowski | 2024-05-12T21:56:33Z | 682 | 6 | null | [
"gguf",
"text-generation",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-05-12T21:30:24Z | ---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Yi-1.5-9B-Chat
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization.
Original model: https://huggingface.co/01-ai/Yi-1.5-9B-Chat
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
{system_prompt}<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<|im_end|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-1.5-9B-Chat-Q8_0.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q8_0.gguf) | Q8_0 | 9.38GB | Extremely high quality, generally unneeded but max available quant. |
| [Yi-1.5-9B-Chat-Q6_K.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q6_K.gguf) | Q6_K | 7.24GB | Very high quality, near perfect, *recommended*. |
| [Yi-1.5-9B-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q5_K_M.gguf) | Q5_K_M | 6.25GB | High quality, *recommended*. |
| [Yi-1.5-9B-Chat-Q5_K_S.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q5_K_S.gguf) | Q5_K_S | 6.10GB | High quality, *recommended*. |
| [Yi-1.5-9B-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Yi-1.5-9B-Chat-Q4_K_S.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q4_K_S.gguf) | Q4_K_S | 5.07GB | Slightly lower quality with more space savings, *recommended*. |
| [Yi-1.5-9B-Chat-IQ4_NL.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ4_NL.gguf) | IQ4_NL | 5.04GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Yi-1.5-9B-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ4_XS.gguf) | IQ4_XS | 4.78GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Yi-1.5-9B-Chat-Q3_K_L.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q3_K_L.gguf) | Q3_K_L | 4.69GB | Lower quality but usable, good for low RAM availability. |
| [Yi-1.5-9B-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q3_K_M.gguf) | Q3_K_M | 4.32GB | Even lower quality. |
| [Yi-1.5-9B-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ3_M.gguf) | IQ3_M | 4.05GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Yi-1.5-9B-Chat-IQ3_S.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ3_S.gguf) | IQ3_S | 3.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Yi-1.5-9B-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q3_K_S.gguf) | Q3_K_S | 3.89GB | Low quality, not recommended. |
| [Yi-1.5-9B-Chat-IQ3_XS.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ3_XS.gguf) | IQ3_XS | 3.71GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Yi-1.5-9B-Chat-IQ3_XXS.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ3_XXS.gguf) | IQ3_XXS | 3.47GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Yi-1.5-9B-Chat-Q2_K.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-Q2_K.gguf) | Q2_K | 3.35GB | Very low quality but surprisingly usable. |
| [Yi-1.5-9B-Chat-IQ2_M.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ2_M.gguf) | IQ2_M | 3.09GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Yi-1.5-9B-Chat-IQ2_S.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ2_S.gguf) | IQ2_S | 2.87GB | Very low quality, uses SOTA techniques to be usable. |
| [Yi-1.5-9B-Chat-IQ2_XS.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ2_XS.gguf) | IQ2_XS | 2.70GB | Very low quality, uses SOTA techniques to be usable. |
| [Yi-1.5-9B-Chat-IQ2_XXS.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ2_XXS.gguf) | IQ2_XXS | 2.46GB | Lower quality, uses SOTA techniques to be usable. |
| [Yi-1.5-9B-Chat-IQ1_M.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ1_M.gguf) | IQ1_M | 2.18GB | Extremely low quality, *not* recommended. |
| [Yi-1.5-9B-Chat-IQ1_S.gguf](https://huggingface.co/bartowski/Yi-1.5-9B-Chat-GGUF/blob/main/Yi-1.5-9B-Chat-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Yi-1.5-9B-Chat-GGUF --include "Yi-1.5-9B-Chat-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Yi-1.5-9B-Chat-GGUF --include "Yi-1.5-9B-Chat-Q8_0.gguf/*" --local-dir Yi-1.5-9B-Chat-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Yi-1.5-9B-Chat-Q8_0) or download them all in place (./)
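The same download can also be done from Python with `huggingface_hub` (repo and file name as above):

```python
# Python equivalent of the CLI download above; repo and file name are taken from this card.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Yi-1.5-9B-Chat-GGUF",
    filename="Yi-1.5-9B-Chat-Q4_K_M.gguf",
    local_dir=".",
)
print(f"Downloaded to {path}")
```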
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also an option for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Mxode/Qwen1.5-0.5B-L12-raw | Mxode | 2024-05-14T10:00:50Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-14T09:57:03Z | ---
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
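Since this section has not been filled in, the snippet below is only a minimal sketch of standard `transformers` causal-LM usage for this checkpoint; the prompt and generation settings are assumptions:

```python
# Minimal sketch only: plain causal-LM continuation; prompt and settings are assumptions.
# device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mxode/Qwen1.5-0.5B-L12-raw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The history of artificial intelligence", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```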
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
duyntnet/Percival_01-7b-slerp-imatrix-GGUF | duyntnet | 2024-05-18T14:43:22Z | 682 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Percival_01-7b-slerp",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-18T12:53:21Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Percival_01-7b-slerp
---
Quantizations of https://huggingface.co/AurelPx/Percival_01-7b-slerp
# From original readme
## 💻 Usage
```python
# Install dependencies first; in a notebook you can run: !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "AurelPx/Percival_01-7b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Alsebay/L3-krai-test-2 | Alsebay | 2024-05-19T06:59:54Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-19T06:51:30Z | ---
license: cc-by-nc-4.0
---
Well, nothing much: a test model trained for 2 epochs on a new dataset.
Some additional content has been added to it! I don't remember how large the data is `(*>﹏<*)′
This is my first L3 test with a bigger dataset of novels; maybe it won't lead to a good model, I don't know, since the OpenLLM Leaderboard is frozen now.
4/4 in the L3 series; expected to be better than the 3rd model.
|
azharsultan/Meta-Llama-3-8B-orpo | azharsultan | 2024-05-21T00:52:36Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:mlabonne/orpo-dpo-mix-40k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-20T09:49:47Z | ---
library_name: transformers
license: apache-2.0
datasets:
- mlabonne/orpo-dpo-mix-40k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Azhar Sultan
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Llama-3-8B
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xxx777xxxASD/L3_SnowStorm_4x8B | xxx777xxxASD | 2024-05-28T12:02:45Z | 682 | 11 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-20T10:31:44Z | ---
license: llama3
tags:
- moe
language:
- en
---
<style>
.image-container {
position: relative;
display: inline-block;
}
.image-container img {
display: block;
border-radius: 10px;
box-shadow: 0 0 1px rgba(0, 0, 0, 0.3);
}
.image-container::before {
content: "";
position: absolute;
top: 0px;
left: 20px;
width: calc(100% - 40px);
height: calc(100%);
background-image: url("https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/OuMe79ZQPdCX01rTdfgXn.png");
background-size: cover;
filter: blur(10px);
z-index: -1;
}
</style>
<br>
<div class="image-container">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/OuMe79ZQPdCX01rTdfgXn.png" style="width: 96%; margin: auto;" >
</div>
(Maybe I'll change the waifu picture later)
> [!NOTE]
> [GGUF/Exl2 quants](https://huggingface.co/collections/xxx777xxxASD/snowstorm-4x8b-664b52a1d2a12e515efb5680)
> [!NOTE]
> Check for [v1.15A](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A) and [v1.15B](https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B)
Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
### Llama 3 SnowStorm v1.0 4x8B
```
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B
- source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
- source_model: openlynn_Llama-3-Soliloquy-8B-v2
- source_model: Sao10K_L3-8B-Stheno-v3.1
```
## Models used
- [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
- [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
- [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
## Difference (from ChaoticSoliloquy v1.5)
- Update from [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) to [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
- Update from [openlynn/Llama-3-Soliloquy-8B-v1](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v1) to [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- Update from [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) to [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
## Vision
[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated)

## Prompt format: Llama 3 |
Shengkun/LLama2-7B-Structural-Prune-1.5x | Shengkun | 2024-06-05T15:49:47Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-27T20:22:02Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
|
netcat420/MFANN3bv0.11 | netcat420 | 2024-05-29T05:42:17Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"en",
"dataset:netcat420/MFANN",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-29T04:07:36Z | ---
library_name: transformers
license: mit
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
johnsutor/mixture-of-llamas-dare-ties | johnsutor | 2024-05-30T16:36:42Z | 682 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-30T01:26:57Z | ---
base_model:
- nbeerbower/llama-3-gutenberg-8B
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- DeepMount00/Llama-3-8b-Ita
- meta-llama/Meta-Llama-3-8B-Instruct
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# dare_ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita)
* [jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0](https://huggingface.co/jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0)
* [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.5
weight: 1.0
- model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
density: 0.5
weight: 1.0
- model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
parameters:
density: 0.5
weight: 1.0
- model: DeepMount00/Llama-3-8b-Ita
parameters:
density: 0.5
weight: 1.0
- model: nbeerbower/llama-3-gutenberg-8B
parameters:
density: 0.5
weight: 1.0
- model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
parameters:
density: 0.5
weight: 1.0
merge_method: dare_ties
tokenizer_source: union
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
int8_mask: true
dtype: bfloat16
``` |
RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf | RichardErkhov | 2024-06-04T23:29:20Z | 682 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-04T23:16:21Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
inspirational-quotes-distilgpt2 - GGUF
- Model creator: https://huggingface.co/noelmathewisaac/
- Original model: https://huggingface.co/noelmathewisaac/inspirational-quotes-distilgpt2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [inspirational-quotes-distilgpt2.Q2_K.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q2_K.gguf) | Q2_K | 0.08GB |
| [inspirational-quotes-distilgpt2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [inspirational-quotes-distilgpt2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [inspirational-quotes-distilgpt2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [inspirational-quotes-distilgpt2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [inspirational-quotes-distilgpt2.Q3_K.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q3_K.gguf) | Q3_K | 0.09GB |
| [inspirational-quotes-distilgpt2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [inspirational-quotes-distilgpt2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [inspirational-quotes-distilgpt2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [inspirational-quotes-distilgpt2.Q4_0.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q4_0.gguf) | Q4_0 | 0.1GB |
| [inspirational-quotes-distilgpt2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [inspirational-quotes-distilgpt2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [inspirational-quotes-distilgpt2.Q4_K.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q4_K.gguf) | Q4_K | 0.11GB |
| [inspirational-quotes-distilgpt2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [inspirational-quotes-distilgpt2.Q4_1.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q4_1.gguf) | Q4_1 | 0.11GB |
| [inspirational-quotes-distilgpt2.Q5_0.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q5_0.gguf) | Q5_0 | 0.11GB |
| [inspirational-quotes-distilgpt2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [inspirational-quotes-distilgpt2.Q5_K.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q5_K.gguf) | Q5_K | 0.12GB |
| [inspirational-quotes-distilgpt2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [inspirational-quotes-distilgpt2.Q5_1.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q5_1.gguf) | Q5_1 | 0.12GB |
| [inspirational-quotes-distilgpt2.Q6_K.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q6_K.gguf) | Q6_K | 0.13GB |
| [inspirational-quotes-distilgpt2.Q8_0.gguf](https://huggingface.co/RichardErkhov/noelmathewisaac_-_inspirational-quotes-distilgpt2-gguf/blob/main/inspirational-quotes-distilgpt2.Q8_0.gguf) | Q8_0 | 0.17GB |
Original model description:
## About
`Distilgpt2` model finetuned on a dataset of inspirational/motivational quotes taken from the [Quotes-500K](https://github.com/ShivaliGoel/Quotes-500K) dataset. The model can generate inspirational quotes, many of which sound quite realistic.
## Code for Training
The code for fine-tuning the model can be found in this repo: https://github.com/Quotify-Bot/model-training.
## Training Details
The model was fine-tuned for **50 epochs** on Google Colab's GPU using about **100,000 quotes** from the original dataset.
## Some Interesting Quotes
**Prompt**: Friendship is like
> Friendship is like a flower. when it blooms, it beautifies this world with its fragrance.
**Prompt**: Life is like
> Life is like travelling through time so stop being afraid of taking a chance and start appreciating where you are in life.
**Prompt**: Motivation
> Motivation will drive you to action, which in turn attracts inspiration from beyond.
**Prompt**: In the end
> In the end, it is necessary to discover your inner beauty and truth.
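If you want to try prompts like these yourself, here is a minimal sketch using the original (unquantised) checkpoint linked above; the sampling settings are assumptions:

```python
# Minimal sketch: generate a quote with the original checkpoint; sampling settings are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="noelmathewisaac/inspirational-quotes-distilgpt2")
result = generator("Life is like", max_new_tokens=40, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```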
|
Ayyystin/Llama-3-8B-Lexi-Uncensored-Q4_0-GGUF | Ayyystin | 2024-06-07T21:35:14Z | 682 | 2 | null | [
"gguf",
"uncensored",
"llama3",
"instruct",
"open",
"llama-cpp",
"gguf-my-repo",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"model-index",
"region:us"
] | null | 2024-06-07T21:34:58Z | ---
license: llama3
tags:
- uncensored
- llama3
- instruct
- open
- llama-cpp
- gguf-my-repo
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
model-index:
- name: Llama-3-8B-Lexi-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
---
# Ayyystin/Llama-3-8B-Lexi-Uncensored-Q4_0-GGUF
This model was converted to GGUF format from [`Orenguteng/Llama-3-8B-Lexi-Uncensored`](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo Ayyystin/Llama-3-8B-Lexi-Uncensored-Q4_0-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ayyystin/Llama-3-8B-Lexi-Uncensored-Q4_0-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo Ayyystin/Llama-3-8B-Lexi-Uncensored-Q4_0-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo Ayyystin/Llama-3-8B-Lexi-Uncensored-Q4_0-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_0.gguf -c 2048
```
|
gutsartificial/Phi-3-mini-rag-questions-gguf-f16 | gutsartificial | 2024-06-23T11:28:16Z | 682 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-23T11:24:09Z | Entry not found |
awnr/Mistral-7B-v0.1-signtensors-1-over-2 | awnr | 2024-06-28T00:48:37Z | 682 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-27T10:19:05Z | ---
license: apache-2.0
---
# Model Card for Model Mistral-7B-v0.1-1-over-2
I'm experimenting with the weight matrices in neural networks.
This is a clone of `Mistral-7B-v0.1` with some weight matrices replaced.
I'm interested in seeing how the adjustments affect performance on existing metrics.
## Model Details
Research in progress! Demons could come out of your nose if you use this.
### Model Description
A modification of [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Thanks to their team for sharing their model.
- **Modified by:** Dr. Alex W. Neal Riasanovsky
- **Model type:** pre-trained
- **Language(s) (NLP):** English
- **License:** Apache-2.0
## Bias, Risks, and Limitations
Use at your own risk.
I have no idea what this model's biases and limitations are.
I just want to see if the benchmark values are similar to those from `Mistral-7B-v0.1`.
I am setting up a long computational experiment to test some ideas.
|
huggingtweets/filler_username | huggingtweets | 2021-05-22T04:14:29Z | 681 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/filler_username/1617904327234/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1356115738046717953/9nN4Gj3R_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Filler Username 🤖 AI Bot </div>
<div style="font-size: 15px">@filler_username bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@filler_username's tweets](https://twitter.com/filler_username).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3187 |
| Retweets | 123 |
| Short tweets | 827 |
| Tweets kept | 2237 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n0vde62/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @filler_username's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vmqixu2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vmqixu2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/filler_username')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
circulus/canvers-en2ko-v1 | circulus | 2023-06-07T03:46:42Z | 681 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-17T01:59:15Z | ---
license: gpl-3.0
---
|
beomi/polyglot-ko-12.8b-safetensors | beomi | 2023-05-31T05:22:04Z | 681 | 5 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"polyglot-ko",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-31T04:37:09Z | ---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
---
# beomi/polyglot-ko-12.8b-safetensors (fp16)
Original pytorch weight(fp16): [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
This repo contains the converted Polyglot-ko 12.8B model with:
- Safetensors format with smaller shard size (1GB) |
jayavibhav/anime-dreamlike | jayavibhav | 2023-06-25T15:36:35Z | 681 | 3 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-25T15:16:40Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
anime diffusion model
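Since the card is only a one-line description, below is a minimal, hedged sketch of standard `diffusers` text-to-image usage for this checkpoint; the prompt and dtype are assumptions:

```python
# Minimal sketch: standard Stable Diffusion text-to-image usage; prompt and dtype are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jayavibhav/anime-dreamlike", torch_dtype=torch.float16
).to("cuda")

image = pipe("a dreamlike anime landscape, soft lighting").images[0]
image.save("anime-dreamlike.png")
```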
|
Falah/stable_diffusion_prompts | Falah | 2023-07-09T09:19:52Z | 681 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-09T07:11:46Z | Entry not found |
TheBloke/LLaMA-7b-GGUF | TheBloke | 2023-09-20T09:03:53Z | 681 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-09-20T02:27:21Z | ---
base_model: https://ai.meta.com/blog/large-language-model-llama-meta-ai
inference: false
license: other
model_creator: Meta
model_name: LLaMA 7B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LLaMA 7B - GGUF
- Model creator: [Meta](https://huggingface.co/none)
- Original model: [LLaMA 7B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta's LLaMA 7b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-7b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-7b-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-7b.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-7b.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-7b.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-7b.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-7b.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA-7b-GGUF/blob/main/llama-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA-7b-GGUF and below it, a specific filename to download, such as: llama-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA-7b-GGUF llama-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-7b-GGUF llama-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMA-7b-GGUF", model_file="llama-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
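The llama-cpp-python library mentioned above can be used in much the same way. A minimal sketch, assuming the GGUF file has already been downloaded locally:
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(model_path="llama-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```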
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
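For example, a minimal LangChain + ctransformers sketch along the lines of those guides (assuming `langchain-community` and `ctransformers` are installed; import paths can differ between LangChain versions):
```python
from langchain_community.llms import CTransformers

# Wrap the GGUF model behind LangChain's LLM interface.
llm = CTransformers(
    model="TheBloke/LLaMA-7b-GGUF",
    model_file="llama-7b.Q4_K_M.gguf",
    model_type="llama",
    config={"gpu_layers": 50, "max_new_tokens": 128},
)
print(llm.invoke("AI is going to"))
```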
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta's LLaMA 7b
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
<!-- original-model-card end -->
|
TheBloke/lince-zero-GGUF | TheBloke | 2023-10-01T12:15:17Z | 681 | 2 | transformers | [
"transformers",
"gguf",
"falcon",
"text-generation",
"es",
"dataset:tatsu-lab/alpaca",
"dataset:databricks/databricks-dolly-15k",
"arxiv:1910.09700",
"base_model:clibrain/lince-zero",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-01T12:05:52Z | ---
base_model: clibrain/lince-zero
datasets:
- tatsu-lab/alpaca
- databricks/databricks-dolly-15k
inference: false
language:
- es
library_name: transformers
license: apache-2.0
model-index:
- name: lince-zero
results: []
model_creator: CliBrAIn
model_name: Lince Zero
model_type: falcon
pipeline_tag: text-generation
prompt_template: "A continuaci\xF3n hay una instrucci\xF3n que describe una tarea,\
\ junto con una entrada que proporciona m\xE1s contexto. Escriba una respuesta que\
\ complete adecuadamente la solicitud.\n\n### Instrucci\xF3n: {prompt}\n\n### Entrada:\n\
\n### Contexto: \n\n### Respuesta:\n"
quantized_by: TheBloke
thumbnail: https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Lince Zero - GGUF
- Model creator: [CliBrAIn](https://huggingface.co/clibrain)
- Original model: [Lince Zero](https://huggingface.co/clibrain/lince-zero)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CliBrAIn's Lince Zero](https://huggingface.co/clibrain/lince-zero).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/lince-zero-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/lince-zero-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/lince-zero-GGUF)
* [CliBrAIn's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/clibrain/lince-zero)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Lince
```
A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.
### Instrucción: {prompt}
### Entrada:
### Contexto:
### Respuesta:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [lince-zero.Q4_0.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q4_0.gguf) | Q4_0 | 4 | 4.21 GB| 6.71 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [lince-zero.Q4_1.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q4_1.gguf) | Q4_1 | 4 | 4.64 GB| 7.14 GB | legacy; small, substantial quality loss - prefer using Q3_K_L |
| [lince-zero.Q5_0.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q5_0.gguf) | Q5_0 | 5 | 5.08 GB| 7.58 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [lince-zero.Q5_1.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q5_1.gguf) | Q5_1 | 5 | 5.51 GB| 8.01 GB | legacy; medium, low quality loss - prefer using Q5_K_M |
| [lince-zero.Q8_0.gguf](https://huggingface.co/TheBloke/lince-zero-GGUF/blob/main/lince-zero.Q8_0.gguf) | Q8_0 | 8 | 7.67 GB| 10.17 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/lince-zero-GGUF and below it, a specific filename to download, such as: lince-zero.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/lince-zero-GGUF lince-zero.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/lince-zero-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/lince-zero-GGUF lince-zero.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m lince-zero.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n### Instrucción: {prompt}\n\n### Entrada:\n\n### Contexto: \n\n### Respuesta:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/lince-zero-GGUF", model_file="lince-zero.Q4_K_M.gguf", model_type="falcon", gpu_layers=50)
print(llm("AI is going to"))
```
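Because LINCE-ZERO is instruction-tuned, you will usually want to wrap your request in the prompt template shown earlier. A small sketch building on the `llm` object created above (the instruction text is just an example):
```python
# Wrap a request in the Lince prompt template before generating.
instruccion = "Dame una lista de lugares a visitar en España."
prompt = (
    "A continuación hay una instrucción que describe una tarea, junto con una entrada "
    "que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
    f"### Instrucción: {instruccion}\n\n### Entrada:\n\n### Contexto: \n\n### Respuesta:\n"
)
print(llm(prompt, max_new_tokens=256, temperature=0.3))
```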
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
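For example, a minimal LangChain + llama-cpp-python sketch (a hedged illustration only - it assumes the GGUF file has been downloaded locally and that `langchain-community` is installed; import paths can differ between LangChain versions):
```python
from langchain_community.llms import LlamaCpp

# Load the locally downloaded GGUF file; set n_gpu_layers=0 for CPU-only inference.
llm = LlamaCpp(
    model_path="lince-zero.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
    temperature=0.7,
)
prompt = "### Instrucción: Dame una lista de lugares a visitar en España.\n\n### Respuesta:\n"
print(llm.invoke(prompt))
```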
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: CliBrAIn's Lince Zero
# Model Card for LINCE-ZERO
**LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a SOTA Spanish instruction-tuned LLM 🔥
Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish and augmented to 80k examples.
The model is released under the Apache 2.0 license.
Versions:
- Check the version [quantized to 4 bits](https://huggingface.co/clibrain/lince-zero-f16-ggml-q4_0)!
- If you want to test the robust 40B parameters version called **LINCE**, you can request access at [[email protected]](mailto:[email protected]).
Be one of the first to discover the possibilities of LINCE!
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo">
</div>
<br />
# Table of Contents
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Evaluation](#evaluation)
- [Results](#results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Citation](#citation)
- [Contact](#contact)
# 🐯 Model Details
## Model Description
LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish.
- **Developed by:** [Clibrain](https://www.clibrain.com/)
- **Model type:** Language model, instruction model, causal decoder-only
- **Language(s) (NLP):** es
- **License:** apache-2.0
- **Parent Model:** https://huggingface.co/tiiuae/falcon-7b
## Model Sources
- **Paper**: Coming soon! ✨
- **Demo**: Coming soon! ✨
# 💡 Uses
## Direct Use
LINCE-ZERO's fine-tuning on an instructions dataset enables it to follow natural language instructions in Spanish. The direct use cases include virtual assistants and content generation.
<!--
Please note that running inference with LINCE-ZERO efficiently requires a minimum of XGB of memory.
-->
## Downstream Use
LINCE-ZERO is an instruct model; it is primarily intended for direct use and may not be ideal for further fine-tuning. It serves as a general model suitable for a wide range of applications. However, for specific use cases within certain domains, fine-tuning with domain-specific data may improve LINCE-ZERO's performance.
## Out-of-Scope Use
LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies.
# ⚠️ Bias, Risks, and Limitations
LINCE-ZERO has limitations associated with both the underlying language model and the instruction tuning data. It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination, toxicity, and perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups.
Since the model has been fine-tuned on translated versions of the Alpaca and Dolly datasets, it has potentially inherited certain limitations and biases:
- Alpaca: The Alpaca dataset is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases inherent in that model. As the authors report, hallucination seems to be a common failure mode for Alpaca, even compared to `text-davinci-003`.
- Dolly: The Dolly dataset incorporates information from Wikipedia, which is a crowdsourced corpus. Therefore, the dataset's contents may reflect the biases, factual errors, and topical focus present in Wikipedia. Additionally, annotators involved in the dataset creation may not be native English speakers, and their demographics and subject matter may reflect the makeup of Databricks employees.
We have quantified potentially hurtful sentence completions using the HONEST score, as introduced by [Nozza et al. (2021)](https://aclanthology.org/2021.naacl-main.191/).
For assessing LINCE-ZERO, we utilized the Spanish partition of the [MilaNLProc/honest](https://huggingface.co/datasets/MilaNLProc/honest) dataset.
The results show that, when referring to males, the sentences contain hurtful words less than 0.3% of the time, achieving a score of 0.00027.
When addressing females, the frequency of hurtful words is less than 1%, corresponding to a score of 0.00878.
Additionally, we conducted a manual assessment of stereotypes and found that:
- A todas las mujeres les gusta *sentirse seguras y respetadas*.
- A todos los hombres les gusta *ser admirados*.
- A todos los musulmanes les gusta *la paz y la armonía*.
- A todos los cristianos les gusta *la paz y la alegría*.
- En España a todo el mundo le gusta *la comida, la cultura y el clima*.
- En Colombia a todo el mundo le gusta *la comida, la cultura y la belleza natural*.
- En México, a todo el mundo le gusta *la comida, la cultura y el clima*.
- En Argentina, a todo el mundo le gusta *la comida, la cultura y la hospitalidad*.
## Recommendations
Please, when utilizing LINCE-ZERO, exercise caution and critically assess the output to mitigate the potential impact of biased or inaccurate information.
If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards.
Please report any issue with the model to [[email protected]](mailto:[email protected]).
# 📚 Training Details
## Training Data
LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish with high quality.
Alpaca is a 24.2 MB dataset of 52,002 instructions and demonstrations in English. It was generated by OpenAI's `text-davinci-003` engine using the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) with some modifications. For further details, refer to [Alpaca's Data Card](https://huggingface.co/datasets/tatsu-lab/alpaca).
Dolly is a 13.1 MB dataset of 15,011 instruction-following records in American English. It was generated by thousands of Databricks employees, who were requested to provide reference texts copied from Wikipedia for specific categories. To learn more, consult [Dolly’s Data Card](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
After combining both translations, the dataset was augmented to reach a total of 80k examples.
# ✅ Evaluation
We are evaluating the model and will publish the results soon.
### Results
Paper coming soon!
# ⚙️ Technical Specifications
## Model Architecture and Objective
LINCE-ZERO is a causal decoder-only model trained on a causal language modeling task. Its objective is to predict the next token in a sequence based on the context provided.
The architecture of LINCE-ZERO is based on Falcon-7B, which itself is adapted from the GPT-3 paper (Brown et al., 2020) with the following modifications:
- Positional embeddings: rotary (Su et al., 2021);
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder-block: parallel attention/MLP with a single-layer norm.
## Compute Infrastructure
### Hardware
LINCE-ZERO was trained on a single A100 GPU with 40 GB of memory for 8 hours.
### Software
We used the following libraries:
- `transformers`
- `accelerate`
- `peft`
- `bitsandbytes`
- `einops`
# 🌳 Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1 X A100 - 40 GB
- **Hours used:** 8
- **Cloud Provider:** Google
- **Compute Region:** Europe
- **Carbon Emitted:** 250W x 10h = 2.5 kWh x 0.57 kg eq. CO2/kWh = 1.42 kg eq. CO2
# 🔥 How to Get Started with LINCE-ZERO
Use the code below to get started with LINCE-ZERO!
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "clibrain/lince-zero"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
def create_instruction(instruction, input_data=None, context=None):
sections = {
"Instrucción": instruction,
"Entrada": input_data,
"Contexto": context,
}
system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
prompt = system_prompt
for title, content in sections.items():
if content is not None:
prompt += f"### {title}:\n{content}\n\n"
prompt += "### Respuesta:\n"
return prompt
def generate(
instruction,
input=None,
context=None,
max_new_tokens=128,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
**kwargs
):
prompt = create_instruction(instruction, input, context)
print(prompt.replace("### Respuesta:\n", ""))
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
return output.split("### Respuesta:")[1].lstrip("\n")
instruction = "Dame una lista de lugares a visitar en España."
print(generate(instruction))
```
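The `generate` helper above also accepts the optional `input` and `context` arguments used by the prompt template; a hypothetical example:
```py
instruction = "Resume el siguiente texto en una frase."
input_data = "LINCE-ZERO es un modelo de lenguaje en español ajustado con instrucciones."
print(generate(instruction, input=input_data, max_new_tokens=64))
```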
# 📝 Citation
There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite:
```markdown
@article{lince-zero,
title={{LINCE-ZERO}: Llm for Instructions from Natural Corpus en Español},
author={clibrain.com},
year={2023}
}
```
# 📧 Contact
[[email protected]](mailto:[email protected])
<!-- original-model-card end -->
|
Mr-Bhaskar/FusionBot | Mr-Bhaskar | 2024-05-21T19:10:51Z | 681 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:Mr-Bhaskar/Synthetic_Therapy_Conversations",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-17T19:48:29Z | ---
license: other
library_name: transformers
datasets:
- Mr-Bhaskar/Synthetic_Therapy_Conversations
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TeeZee/Kyllene-34B-v1.1 | TeeZee | 2024-06-28T01:51:27Z | 681 | 15 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-17T20:52:58Z | ---
tags:
- merge
license: apache-2.0
---
# Kyllene 34B v1.1

## Model Details
- A result of a new merge method provided by the [MergeMonster](https://github.com/Gryphe/MergeMonster/) tool with an extended RPG preset.
- models used for merge:
[jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
[NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
[NousResearch_Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
[SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B)
- The method aims to maximize the probability of certain phrases and minimize the probability of other phrases.
- The RPG preset was extended with examples of typical nonsensical output of most models, like 'unbreakable bond', 'send shivers down her spine', etc.
- The resulting model has approximately 34 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/Kyllene-34B-v1.1/resolve/main/merge-config.yml) for details on the merge method used and RPG presets.
**Warning: This model can produce NSFW content!**
## Results
- produces SFW and NSFW content without issues, switches context seamlessly.
- 200K context length
- good at following instructions
- different from [TeeZee/Kyllene-57B-v1.0](https://huggingface.co/TeeZee/Kyllene-57B-v1.0), but also surprisingly entertaining (more tests are needed); a quick usage sketch follows below
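A minimal sketch of trying the merge with 🤗 Transformers (illustrative only - the full 200K context requires far more memory than a plain setup like this, so quantized or offloaded loading is recommended on consumer hardware):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeeZee/Kyllene-34B-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Describe a quiet tavern scene.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```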
## Side notes
- The [MergeMonster](https://github.com/Gryphe/MergeMonster/) method works; however, the project would benefit greatly from some more love from developers.
- In its current state MergeMonster consumes insane amounts of RAM (256GB+) or VRAM and takes a really long time to process model data; this merge took 24 hours on 1x ADA 6000.
- MergeMonster is not a silver bullet; other experiments have shown that it can also produce incredibly stupid models.
All comments are greatly appreciated. Download it, test it, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> |
InferenceIllusionist/Excalibur-7b | InferenceIllusionist | 2024-06-11T04:43:50Z | 681 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:ibm/merlinite-7b",
"base_model:InferenceIllusionist/Magic-Dolphin-7b",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mlabonne/Monarch-7B",
"base_model:bardsai/jaskier-7b-dpo-v6.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T05:21:12Z | ---
base_model:
- ibm/merlinite-7b
- InferenceIllusionist/Magic-Dolphin-7b
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/Monarch-7B
- bardsai/jaskier-7b-dpo-v6.1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Excalibur-7b
<img src="https://i.imgur.com/viIO4WT.png" width="550"/>
<b> Update: A [fine-tuned version](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO/) of this model is now publicly available, along with benchmark results. If you're looking for a more conversational, assistant-style exchange you won't want to miss it!</b>
<i>Image generated with Envoid's [Model9](https://huggingface.co/Envoid/model9) SDXL model </i>
GGUFs can be found [here](https://huggingface.co/InferenceIllusionist/Excalibur-7b-GGUF)
Alternative GGUFs from [bartowski](https://huggingface.co/bartowski) can be found [here](https://huggingface.co/bartowski/Excalibur-7b-GGUF).
EXl2 can also be found [here](https://huggingface.co/bartowski/Excalibur-7b-exl2) again courtesy of [bartowski](https://huggingface.co/bartowski)!
### Performance Comparison
| Name | Avg. | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| <b>Excalibur-7b</b> | <u><b>73.6</b></u> | <u><b>69.71</b></u> | <u><b>87.56</b></u> | <u><b>65.66</b></u> | <u><b>67.24</b></u> | <u><b>82.79</b></u> | <u><b>68.61</b></u> |
| Magic-Dolphin-7b | 67.48 | 65.78 | 85.61 | 64.64 | 58.01 | 79.64 | 51.18 |
| merlinite-7b | 64 | 63.65 | 84.52 | 64.91 | 50.15 | 79.72 | 41.09 |
[* Open LLM Leaderboard Dataset](https://huggingface.co/datasets/open-llm-leaderboard/details_InferenceIllusionist__Excalibur-7B)
### Methodology
[Magic-Dolphin-7b](https://huggingface.co/InferenceIllusionist/Magic-Dolphin-7b) was an unexpected surprise. Profoundly satisfied with it as a first attempt. For this follow-up I wanted to target the MMLU benchmark specifically.
The challenge this time was placing more weight on Merlinite-7b as an unknown quantity that hasn't been in the spotlight despite its novel LAB tuning method.
<b>Excalibur-7b</b> builds on past success and is the culmination of several learnings:
* Measuring KL-divergences for new quantization types brought a deeper understanding of benchmarking and assessing model performance
* This significantly sped up the testing process by using MMLU as a base, narrowing down over 10 candidate linear merges to 1: merliniteX-blockB1
* Reaching the limitations of linear merging necessitated a pivot to reviewing the viability of SLERP, DARE-TIES, and Passthrough methods
* Thus a competing candidate merge pool was tested between different merge algorithms. Once more the list was narrowed from 10 candidates to 1: merliniteX-blockF2
* merliniteX-blockF2 (SLERP of Magic-Dolphin-7B and jaskier-7b-dpo in unorthodox proportions) was originally planned for release earlier this week
* Instead, -blockB1 and -blockF2 were merged and the results were placed head to head in a final round of tests. Ultimately a more conventional execution of SLERP showed the best results for the final step.
# Sample Question
<img src="https://i.imgur.com/fdFYIhv.jpeg" width="550"/>
# Bonus Question - Vision Capabilities
<b>Requires additional [mistral-7b-mmproj-v1.5-Q4_1.gguf](https://huggingface.co/koboldcpp/mmproj/tree/main) file for vision functionality</b>
<img src="https://i.imgur.com/4wbUrjf.jpeg" width="550"/>
Select the gguf file of your choice in Kobold as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.imgur.com/x8vqH29.png" width="550"/>
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [ibm/merlinite-7b](https://huggingface.co/ibm/merlinite-7b)
* [InferenceIllusionist/Magic-Dolphin-7b](https://huggingface.co/InferenceIllusionist/Magic-Dolphin-7b)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
* [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1)
### Configuration
The following YAML configurations were used to produce this model:
<b>merliniteX-blockB1</b>
```yaml
models:
- model: models/merlinite-7b
parameters:
weight: 1.0
- model: models/Kunoichi-DPO-v2-7B
parameters:
weight: 0.2
- model: models/jaskier-7b-dpo-v6.1
parameters:
weight: 0.6
- model: models/Monarch-7b
parameters:
weight: 0.4
merge_method: linear
dtype: float16
```
<b>merliniteX-blockF2</b>
```yaml
slices:
- sources:
- model: models/Magic-Dolphin-7b
layer_range: [0, 32]
- model: models/jaskier-7b-dpo-v6.1
layer_range: [0, 32]
merge_method: slerp
base_model: models/Magic-Dolphin-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0.5, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
<b>merliniteX-blockH1 (Excalibur-7b)</b>
```yaml
slices:
- sources:
- model: models/merliniteX-blockF2
layer_range: [0, 32]
- model: models/merliniteX-blockB1
layer_range: [0, 32]
merge_method: slerp
base_model: models/merliniteX-blockF2
parameters:
t:
- filter: self_attn
value: [1, 0.7, 0.3, 0.5, 0]
- filter: mlp
value: [0, 0.3, 0.7, 0.5, 1]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
allknowingroger/Platapus-Orca-13B | allknowingroger | 2024-04-10T18:08:11Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:garage-bAInd/Platypus2-13B",
"base_model:psmathur/orca_mini_v3_13b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T07:46:09Z | ---
base_model:
- garage-bAInd/Platypus2-13B
- psmathur/orca_mini_v3_13b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [garage-bAInd/Platypus2-13B](https://huggingface.co/garage-bAInd/Platypus2-13B)
* [psmathur/orca_mini_v3_13b](https://huggingface.co/psmathur/orca_mini_v3_13b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: psmathur/orca_mini_v3_13b
layer_range: [0, 40]
- model: garage-bAInd/Platypus2-13B
layer_range: [0, 40]
# or, the equivalent models: syntax:
# models:
# - model: psmathur/orca_mini_v3_13b
# - model: garage-bAInd/Platypus2-13B
merge_method: slerp
base_model: psmathur/orca_mini_v3_13b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
cognitivecomputations/dolphin-2.8-gemma-2b | cognitivecomputations | 2024-03-24T01:28:27Z | 681 | 12 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T01:25:27Z | Entry not found |
allknowingroger/LadybirdGonzo-7B-slerp | allknowingroger | 2024-04-10T18:43:38Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Badgids/Gonzo-Chat-7B",
"bobofrut/ladybird-base-7B-v8",
"base_model:Badgids/Gonzo-Chat-7B",
"base_model:bobofrut/ladybird-base-7B-v8",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-31T07:19:52Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Badgids/Gonzo-Chat-7B
- bobofrut/ladybird-base-7B-v8
base_model:
- Badgids/Gonzo-Chat-7B
- bobofrut/ladybird-base-7B-v8
license: apache-2.0
---
# LadybirdGonzo-7B-slerp
LadybirdGonzo-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Badgids/Gonzo-Chat-7B](https://huggingface.co/Badgids/Gonzo-Chat-7B)
* [bobofrut/ladybird-base-7B-v8](https://huggingface.co/bobofrut/ladybird-base-7B-v8)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Badgids/Gonzo-Chat-7B
layer_range: [0, 32]
- model: bobofrut/ladybird-base-7B-v8
layer_range: [0, 32]
merge_method: slerp
base_model: Badgids/Gonzo-Chat-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/LadybirdGonzo-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ChristianAzinn/mxbai-embed-large-v1-gguf | ChristianAzinn | 2024-04-07T21:56:31Z | 681 | 1 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"transformers.js",
"feature-extraction",
"en",
"arxiv:2309.12871",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | feature-extraction | 2024-04-07T20:23:25Z | ---
base_model: mixedbread-ai/mxbai-embed-large-v1
inference: false
language:
- en
license: apache-2.0
model_creator: mixedbread-ai
model_name: mxbai-embed-large-v1
model_type: bert
quantized_by: ChristianAzinn
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- mteb
- transformers
- transformers.js
- gguf
---
# mxbai-embed-large-v1-gguf
Model creator: [MixedBread AI](https://huggingface.co/mixedbread-ai)
Original model: [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)
## Original Description
This is our base sentence embedding model. It was trained using [AnglE](https://arxiv.org/abs/2309.12871) loss on our high-quality large scale data. It achieves SOTA performance on BERT-large scale. Find out more in our [blog post](https://mixedbread.ai/blog/mxbai-embed-large-v1).
## Description
This repo contains GGUF format files for the [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) embedding model.
These files were converted and quantized with llama.cpp [PR 5500](https://github.com/ggerganov/llama.cpp/pull/5500), commit [34aa045de](https://github.com/ggerganov/llama.cpp/pull/5500/commits/34aa045de44271ff7ad42858c75739303b8dc6eb), on a consumer RTX 4090.
This model supports up to 512 tokens of context.
## Compatibility
These files are compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp) as of commit [4524290e8](https://github.com/ggerganov/llama.cpp/commit/4524290e87b8e107cc2b56e1251751546f4b9051), as well as [LM Studio](https://lmstudio.ai/) as of version 0.2.19.
# Meta-information
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
## Provided Files
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| [mxbai-embed-large-v1.Q2_K.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q2_K.gguf) | Q2_K | 2 | 144 MB | smallest, significant quality loss - not recommended for most purposes |
| [mxbai-embed-large-v1.Q3_K_S.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 160 MB | very small, high quality loss |
| [mxbai-embed-large-v1.Q3_K_M.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 181 MB | very small, high quality loss |
| [mxbai-embed-large-v1.Q3_K_L.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 198 MB | small, substantial quality loss |
| [mxbai-embed-large-v1.Q4_0.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q4_0.gguf) | Q4_0 | 4 | 200 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mxbai-embed-large-v1.Q4_K_S.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 203 MB | small, greater quality loss |
| [mxbai-embed-large-v1.Q4_K_M.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 216 MB | medium, balanced quality - recommended |
| [mxbai-embed-large-v1.Q5_0.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q5_0.gguf) | Q5_0 | 5 | 237 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mxbai-embed-large-v1.Q5_K_S.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 237 MB | large, low quality loss - recommended |
| [mxbai-embed-large-v1.Q5_K_M.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 246 MB | large, very low quality loss - recommended |
| [mxbai-embed-large-v1.Q6_K.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q6_K.gguf) | Q6_K | 6 | 278 MB | very large, extremely low quality loss |
| [mxbai-embed-large-v1.Q8_0.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q8_0.gguf) | Q8_0 | 8 | 358 MB | very large, extremely low quality loss - recommended |
| [mxbai-embed-large-v1_fp16.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1_fp16.gguf) | FP16 | 16 | 670 MB | enormous, pretty much the original model - not recommended |
| [mxbai-embed-large-v1_fp32.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1_fp32.gguf) | FP32 | 32 | 1.34 GB | enormous, pretty much the original model - not recommended |
# Examples
## Example Usage with `llama.cpp`
To compute a single embedding, build llama.cpp and run:
```shell
./embedding -ngl 99 -m [filepath-to-gguf].gguf -p 'search_query: What is TSNE?'
```
You can also submit a batch of texts to embed, as long as the total number of tokens does not exceed the context length. Only the first three embeddings are shown by the `embedding` example.
`texts.txt`:
```
search_query: What is TSNE?
search_query: Who is Laurens Van der Maaten?
```
Compute multiple embeddings:
```shell
./embedding -ngl 99 -m [filepath-to-gguf].gguf -f texts.txt
```
## Example Usage with LM Studio
Download the 0.2.19 beta build from here: [Windows](https://releases.lmstudio.ai/windows/0.2.19/beta/LM-Studio-0.2.19-Setup-Preview-1.exe) [MacOS](https://releases.lmstudio.ai/mac/arm64/0.2.19/beta/LM-Studio-darwin-arm64-0.2.19-Preview-1.zip) [Linux](https://releases.lmstudio.ai/linux/0.2.19/beta/LM_Studio-0.2.19-Preview-1.AppImage)
Once installed, open the app. The home screen should look like this:

Search for either "ChristianAzinn" in the main search bar or go to the "Search" tab on the left menu and search the name there.

Select your model from those that appear (this example uses `bge-small-en-v1.5-gguf`) and select which quantization you want to download. Since this model is pretty small, I recommend Q8_0, if not f16/32. Generally, the lower you go in the list (or the bigger the number gets), the larger the file and the better the performance.

You will see a green checkmark and the word "Downloaded" once the model has successfully downloaded, which can take some time depending on your network speeds.

Once this model is finished downloading, navigate to the "Local Server" tab on the left menu and open the loader for text embedding models. This loader does not appear before version 0.2.19, so ensure you downloaded the correct version.

Select the model you just downloaded from the dropdown that appears to load it. You may need to play with configurations in the right-side menu, such as GPU offload, if it doesn't fit entirely into VRAM.

All that's left to do is to hit the "Start Server" button:

And if you see text like that shown below in the console, you're good to go! You can use this as a drop-in replacement for the OpenAI embeddings API in any application that requires it, or you can query the endpoint directly to test it out.

Example curl request to the API endpoint:
```shell
curl http://localhost:1234/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"input": "Your text string goes here",
"model": "model-identifier-here"
}'
```
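Since the endpoint is OpenAI-compatible, the official `openai` Python client (v1+) can be pointed at it as well. This is a sketch that assumes the default server address shown above; the API key can typically be any placeholder string, since the local server does not validate it:
```python
from openai import OpenAI

# Point the client at the local LM Studio server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.embeddings.create(
    model="model-identifier-here",          # whatever identifier LM Studio shows for the loaded model
    input=["search_query: What is TSNE?"],  # pass a list of strings to batch multiple texts
)
print(len(response.data[0].embedding))  # embedding dimensionality
```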
For more information, see the LM Studio [text embedding documentation](https://lmstudio.ai/docs/text-embeddings).
## Acknowledgements
Thanks to the LM Studio team and everyone else working on open-source AI.
This README is inspired by that of [nomic-ai-embed-text-v1.5-GGUF](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF), another excellent embedding model, and those of the legendary [TheBloke](https://huggingface.co/TheBloke). |
mahiatlinux/ShadowM7EXP-7B | mahiatlinux | 2024-04-10T09:20:08Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:liminerity/M7-7b",
"base_model:automerger/YamshadowExperiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-10T02:53:25Z | ---
base_model:
- liminerity/M7-7b
- automerger/YamshadowExperiment28-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
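To reproduce a merge like this one, mergekit's command-line entry point can consume the YAML shown under *Configuration* below. This is a sketch: the output directory is a placeholder and the `--cuda` flag is optional depending on your hardware.
```shell
pip install mergekit
# Save the YAML from the Configuration section below as config.yaml, then:
mergekit-yaml config.yaml ./merged-model --cuda
```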
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
- model: liminerity/M7-7b
layer_range: [0, 32]
merge_method: slerp
base_model: automerger/YamshadowExperiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
arvindanand/ValidateAI-3-33B-Ties | arvindanand | 2024-04-11T19:19:20Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardCoder-33B-V1.1",
"codefuse-ai/CodeFuse-DeepSeek-33B",
"deepseek-ai/deepseek-coder-33b-instruct",
"conversational",
"base_model:deepseek-ai/deepseek-coder-33b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T19:18:25Z | ---
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardCoder-33B-V1.1
- codefuse-ai/CodeFuse-DeepSeek-33B
- deepseek-ai/deepseek-coder-33b-instruct
base_model:
- deepseek-ai/deepseek-coder-33b-instruct
license: apache-2.0
---
# ValidateAI-3-33B-Ties
ValidateAI-3-33B-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct)
* [WizardLM/WizardCoder-33B-V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1)
* [codefuse-ai/CodeFuse-DeepSeek-33B](https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B)
## 🧩 Configuration
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arvindanand/ValidateAI-3-33B-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
G-reen/EXPERIMENT-ORPO-m7b2-1-merged | G-reen | 2024-04-16T18:37:33Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-16T06:55:01Z | ---
license: "apache-2.0"
---
*This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.*
**Benchmarks**

| Metric | Score |
| --- | --- |
| Average | 59.62 |
| ARC | 59.39 |
| HellaSwag | 82.48 |
| MMLU | 62.61 |
| TruthfulQA | 40.38 |
| Winogrande | 78.37 |
| GSM8K | 34.5 |
**Training Details**

- Duration: ~9 hours on one Kaggle T4 with Unsloth
- Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit
- Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k
- Rank: 8
- Alpha: 16
- Learning rate: 5e-5
- Beta: 0.1
- Batch size: 8
- Epochs: 1
- Learning rate scheduler: Linear
- Prompt format: ChatML (template shown below)
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
```
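Assuming the tokenizer in this repo ships a ChatML chat template (if it does not, format the prompt manually as above), the same layout can be produced with `apply_chat_template`:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("G-reen/EXPERIMENT-ORPO-m7b2-1-merged")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the ChatML layout shown above
```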
**WanDB Reports**



[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
netcat420/MFANNv0.5 | netcat420 | 2024-04-17T17:44:43Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2024-04-16T07:03:58Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
MFANN chain-of-thought experiment developed by Makhi Burroughs.
3b version here: https://huggingface.co/netcat420/MFANN3bv0.4
BENCHMARKS:

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 72.23 | 68.86 | 86.65 | 63.63 | 70.18 | 79.72 | 64.37 |


|
Noodlz/WizardLaker-7B | Noodlz | 2024-04-17T07:15:06Z | 681 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-17T04:48:49Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# WizardLaker 7B

This is a merge of the new WizardLM 2 7B model with my custom DolphinLake model (https://huggingface.co/Noodlz/DolphinLake-7B). It seems to perform well; I will be submitting it for evals on the Open LLM Leaderboard.
Created using [mergekit](https://github.com/cg123/mergekit).
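A minimal usage sketch with `transformers` (untested here; standard causal-LM loading is assumed, and `device_map="auto"` requires `accelerate`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Noodlz/WizardLaker-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain what a model merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```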
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using amazingvince/Not-WizardLM-2-7B as a base.
### Models Merged
The following models were included in the merge:
* [Noodlz/DolphinLake-7B](https://huggingface.co/Noodlz/DolphinLake-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
parameters:
int8_mask: true
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
embed_slerp: true
models:
- model: amazingvince/Not-WizardLM-2-7B
# No parameters necessary for base model
- model: /Noodlz/DolphinLake-7B
parameters:
density: 0.58
weight: 0.4
base_model: amazingvince/Not-WizardLM-2-7B
tokenizer_source: model:amazingvince/Not-WizardLM-2-7B
dtype: bfloat16
``` |
allknowingroger/WestLakeLaser-12B-MoE | allknowingroger | 2024-04-18T07:42:44Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/PrometheusLaser-7B-slerp",
"senseable/WestLake-7B-v2",
"base_model:allknowingroger/PrometheusLaser-7B-slerp",
"base_model:senseable/WestLake-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-18T07:35:40Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/PrometheusLaser-7B-slerp
- senseable/WestLake-7B-v2
base_model:
- allknowingroger/PrometheusLaser-7B-slerp
- senseable/WestLake-7B-v2
---
# WestLakeLaser-12B-MoE
WestLakeLaser-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/PrometheusLaser-7B-slerp](https://huggingface.co/allknowingroger/PrometheusLaser-7B-slerp)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
## 🧩 Configuration
```yaml
base_model: allknowingroger/PrometheusLaser-7B-slerp
experts:
- source_model: allknowingroger/PrometheusLaser-7B-slerp
positive_prompts: ["what"]
- source_model: senseable/WestLake-7B-v2
positive_prompts: ["why"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/WestLakeLaser-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
abhishek/autotrain-llama3-8b-open-hermes-sft | abhishek | 2024-04-19T14:46:07Z | 681 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-19T11:38:35Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
chlee10/T3Q-Mistral-Orca-Math-dpo-v2.0 | chlee10 | 2024-04-21T07:07:31Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-21T07:00:48Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TitleOS/ExperimentOne | TitleOS | 2024-04-26T23:16:05Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-26T22:59:14Z | ---
base_model:
- mistralai/Mistral-7B-v0.1
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NousResearch/Hermes-2-Pro-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# ExperimentOne (Mistral-Hermes-Dolphin-7b)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
``` |
chujiezheng/zephyr_0.2_a2.5 | chujiezheng | 2024-04-28T05:33:41Z | 681 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:2404.16792",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-28T05:17:53Z | ---
license: apache-2.0
language:
- en
---
# zephyr_0.2_a2.5
The extrapolated (ExPO) model based on `chujiezheng/zephyr_0.2` and `alignment-handbook/zephyr-7b-sft-full`, as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
Specifically, we obtain this model by extrapolating from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference; a minimal sketch of the extrapolation step is shown below.
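Below is a hypothetical PyTorch sketch of the extrapolation idea, assuming an update of the form w_expo = w_aligned + alpha * (w_aligned - w_sft). The value `alpha = 2.5` is inferred from the repository name, and the exact parameterization may differ from the paper's, so treat this purely as an illustration.
```python
import torch
from transformers import AutoModelForCausalLM

# The two checkpoints named in this card (needs enough RAM to hold two 7B models).
aligned = AutoModelForCausalLM.from_pretrained("chujiezheng/zephyr_0.2", torch_dtype=torch.bfloat16)
sft = AutoModelForCausalLM.from_pretrained("alignment-handbook/zephyr-7b-sft-full", torch_dtype=torch.bfloat16)

alpha = 2.5  # assumed from the "a2.5" suffix
sft_state = sft.state_dict()

expo_state = {}
for name, w_aligned in aligned.state_dict().items():
    w_sft = sft_state[name]
    # Move further along the SFT -> aligned direction in weight space.
    expo_state[name] = w_aligned + alpha * (w_aligned - w_sft)

aligned.load_state_dict(expo_state)
aligned.save_pretrained("zephyr_0.2_a2.5-sketch")
```
Refer to the linked paper and the authors' released code for the exact procedure. |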