modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Yntec/epiCVision | Yntec | "2023-08-17T03:53:13Z" | 1,267 | 10 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"Realistic",
"Analog",
"Portrait",
"Semi-Realistic",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"SG_161222",
"epinikion",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-17T03:14:17Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Photorealistic
- Realistic
- Analog
- Portrait
- Semi-Realistic
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- SG_161222
- epinikion
---
# epiCVision
A mix of epicRealism and realisticVision. I don't like false modesty; I claim this is better than either model:

(click for larger)
Sample and prompt:

very cute princess with curly hair wearing choker who would marry me
Original pages:
https://civitai.com/models/25694?modelVersionId=30761
https://civitai.com/models/4201?modelVersionId=5196
Full recipe:
# Add Difference 1.0
Primary model:
epicRealism
Secondary model:
epicRealism
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
Temporary
# Weighted Sum 0.70
Primary model:
RealisticVision
Secondary model:
Temporary
Output Model:
epiCVision |
MaziyarPanahi/WizardLM-Math-70B-v0.1 | MaziyarPanahi | "2024-03-04T13:34:47Z" | 1,267 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"wizardlm",
"mique",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-14T00:03:45Z" | ---
license: agpl-3.0
tags:
- merge
- wizardlm
- mique
pipeline_tag: text-generation
model-index:
- name: WizardLM-Math-70B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.06
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.01
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.07
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.77
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
---
## Examples
There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step.
```
Step 1: Identify the initial situation
There are three killers in a room, which we can label as A, B, and C.
Step 2: Describe the event
Someone (let's call this person D) enters the room and kills one of the killers. For the sake of simplicity, let's say that Killer A is killed.
Step 3: Analyze the new situation
After the event, there are now two killers left in the room: B and C. The person who entered the room, D, is also still present. However, since D did not leave the room after killing A, it means that D is also a killer. Therefore, there are actually three killers in the room: B, C, and D.
Final Answer: After the event, there are three killers left in the room: B, C, and D.
```
Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step.
```
Step 1: Understand the given information
We know that Mark is faster than Mary, and Mary is faster than Joe. This means that Mark is the fastest among the three, followed by Mary, and then Joe.
Step 2: Analyze the question
The question asks if Joe is faster than Mark. Based on the given information, we can conclude that Joe is not faster than Mark. In fact, Mark is faster than Joe.
Final Answer: No, Joe is not faster than Mark.
```
If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultaneously?
```
The time taken for a shirt to dry depends on several factors such as the material of the shirt, the humidity, temperature, wind speed, etc. However, if we assume that all four shirts have similar properties and the weather conditions remain constant, then you can expect the drying time to be proportional to the number of shirts.
In other words, if two shirts take 5 hours to dry, then four shirts will likely take twice as long, or 10 hours. This is because there is more fabric to dry, which means it will absorb more water from the air, thus taking longer to reach equilibrium with the surrounding environment.
However, this is just an estimate, and the actual drying time may vary depending on the specific circumstances.
```
Wrong, it takes the same amount of time!
explain step by step 25-4*2+3=?
```
Step 1: Perform the multiplication operation first.
25 - 4 * 2 + 3 = 25 - 8 + 3
Step 2: Now, perform the subtraction and addition operations from left to right.
25 - 8 + 3 = 17 + 3
Step 3: Finally, perform the remaining addition operation.
17 + 3 = 20
So, 25-4*2+3 equals 20.
```
## Eval
### WizardLM/WizardLM-70B-V1.0 vs. MaziyarPanahi/WizardLM-Math-70B-v0.1

### Leaderboard
```python
{
"all": {
"acc": 0.6914116069568377,
"acc_stderr": 0.03063431437342948,
"acc_norm": 0.6938613221179539,
"acc_norm_stderr": 0.031238741076549784,
"mc1": 0.40269277845777235,
"mc1_stderr": 0.01716883093518722,
"mc2": 0.5707095526544473,
"mc2_stderr": 0.01525040450448649
},
"harness|arc:challenge|25": {
"acc": 0.6322525597269625,
"acc_stderr": 0.014090995618168482,
"acc_norm": 0.6706484641638225,
"acc_norm_stderr": 0.013734057652635474
},
"harness|hellaswag|10": {
"acc": 0.6746664011153157,
"acc_stderr": 0.0046754187743142306,
"acc_norm": 0.8600876319458275,
"acc_norm_stderr": 0.0034618713240671846
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.03317672787533157,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.03317672787533157
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7283018867924528,
"acc_stderr": 0.027377706624670713,
"acc_norm": 0.7283018867924528,
"acc_norm_stderr": 0.027377706624670713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.032166008088022675,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.032166008088022675
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.035331333893236574,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.035331333893236574
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.047551296160629475,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.047551296160629475
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.676595744680851,
"acc_stderr": 0.030579442773610337,
"acc_norm": 0.676595744680851,
"acc_norm_stderr": 0.030579442773610337
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.04113914981189261,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.04113914981189261
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4497354497354497,
"acc_stderr": 0.02562085704293665,
"acc_norm": 0.4497354497354497,
"acc_norm_stderr": 0.02562085704293665
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8129032258064516,
"acc_stderr": 0.022185710092252252,
"acc_norm": 0.8129032258064516,
"acc_norm_stderr": 0.022185710092252252
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8363636363636363,
"acc_stderr": 0.02888787239548795,
"acc_norm": 0.8363636363636363,
"acc_norm_stderr": 0.02888787239548795
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822502,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822502
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.927461139896373,
"acc_stderr": 0.018718998520678178,
"acc_norm": 0.927461139896373,
"acc_norm_stderr": 0.018718998520678178
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7025641025641025,
"acc_stderr": 0.023177408131465953,
"acc_norm": 0.7025641025641025,
"acc_norm_stderr": 0.023177408131465953
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.02904560029061626,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.02904560029061626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.02626502460827588,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.02626502460827588
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4503311258278146,
"acc_stderr": 0.04062290018683776,
"acc_norm": 0.4503311258278146,
"acc_norm_stderr": 0.04062290018683776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8954128440366973,
"acc_stderr": 0.013120530245265593,
"acc_norm": 0.8954128440366973,
"acc_norm_stderr": 0.013120530245265593
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.03367462138896078,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.03367462138896078
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9166666666666666,
"acc_stderr": 0.019398452135813905,
"acc_norm": 0.9166666666666666,
"acc_norm_stderr": 0.019398452135813905
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8860759493670886,
"acc_stderr": 0.020681745135884565,
"acc_norm": 0.8860759493670886,
"acc_norm_stderr": 0.020681745135884565
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.757847533632287,
"acc_stderr": 0.028751392398694755,
"acc_norm": 0.757847533632287,
"acc_norm_stderr": 0.028751392398694755
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097655,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097655
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742179,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742179
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.905982905982906,
"acc_stderr": 0.019119892798924974,
"acc_norm": 0.905982905982906,
"acc_norm_stderr": 0.019119892798924974
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526066,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526066
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8697318007662835,
"acc_stderr": 0.012036729568216054,
"acc_norm": 0.8697318007662835,
"acc_norm_stderr": 0.012036729568216054
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7774566473988439,
"acc_stderr": 0.02239421566194282,
"acc_norm": 0.7774566473988439,
"acc_norm_stderr": 0.02239421566194282
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5553072625698324,
"acc_stderr": 0.016619881988177012,
"acc_norm": 0.5553072625698324,
"acc_norm_stderr": 0.016619881988177012
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.77491961414791,
"acc_stderr": 0.023720088516179027,
"acc_norm": 0.77491961414791,
"acc_norm_stderr": 0.023720088516179027
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.02240967454730417,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.02240967454730417
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5390070921985816,
"acc_stderr": 0.029736592526424445,
"acc_norm": 0.5390070921985816,
"acc_norm_stderr": 0.029736592526424445
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5586701434159062,
"acc_stderr": 0.012682016335646683,
"acc_norm": 0.5586701434159062,
"acc_norm_stderr": 0.012682016335646683
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7242647058823529,
"acc_stderr": 0.027146271936625162,
"acc_norm": 0.7242647058823529,
"acc_norm_stderr": 0.027146271936625162
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.017242385828779627,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.017242385828779627
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7877551020408163,
"acc_stderr": 0.026176967197866767,
"acc_norm": 0.7877551020408163,
"acc_norm_stderr": 0.026176967197866767
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8805970149253731,
"acc_stderr": 0.02292879327721974,
"acc_norm": 0.8805970149253731,
"acc_norm_stderr": 0.02292879327721974
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.0266405825391332,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.0266405825391332
},
"harness|truthfulqa:mc|0": {
"mc1": 0.40269277845777235,
"mc1_stderr": 0.01716883093518722,
"mc2": 0.5707095526544473,
"mc2_stderr": 0.01525040450448649
},
"harness|winogrande|5": {
"acc": 0.8176795580110497,
"acc_stderr": 0.010851565594267207
},
"harness|gsm8k|5": {
"acc": 0.6444275966641395,
"acc_stderr": 0.013185402252713852
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__WizardLM-Math-70B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.92|
|AI2 Reasoning Challenge (25-Shot)|67.06|
|HellaSwag (10-Shot) |86.01|
|MMLU (5-Shot) |69.14|
|TruthfulQA (0-shot) |57.07|
|Winogrande (5-shot) |81.77|
|GSM8k (5-shot) |64.44|
|
yanolja/EEVE-Korean-2.8B-v1.0 | yanolja | "2024-02-24T13:54:36Z" | 1,267 | 9 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"arxiv:2402.14714",
"base_model:microsoft/phi-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-22T04:39:37Z" | ---
license: apache-2.0
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: yanolja/EEVE-Korean-2.8B-v1.0
  results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
<p>
# EEVE-Korean-2.8B-v1.0
## Join Our Community on Discord!
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLMs is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).
## Our Dedicated Team (Alphabetical Order)
| Research | Engineering | Product Management | UX Design |
|-----------------|-----------------|--------------------|--------------|
| Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi |
| Seungduk Kim | Rifqi Alfi | | |
| Seungtaek Choi | Sanghoon Han | | |
| | Suhyun Kang | | |
## About the Model
This model is a Korean vocabulary-extended version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), specifically fine-tuned on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens and partially fine-tuning the `lm_head` embeddings for the already existing tokens while preserving the original parameters of the base model.
### Technical Deep Dive
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0/resolve/main/EEVE_figure.png" width="100%"/>
<p>
To adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.
This approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.
Our method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.
We leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
Here’s a simplified code snippet illustrating our key approach:
```python
# number_of_old_tokens is the size of tokenizer before vocab extension. For example, in case of EEVE-Korean-10.8B-v1.0, number_of_old_tokens is 32000.
def freeze_partial_embedding_hook(grad):
    grad[:number_of_old_tokens] = 0
    return grad

for name, param in model.named_parameters():
    if ("lm_head" in name or "embed_tokens" in name) and "original" not in name:
        param.requires_grad = True
        if "embed_tokens" in name:
            param.register_hook(freeze_partial_embedding_hook)
    else:
        param.requires_grad = False
```
### Usage and Limitations
Keep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.
### Training Details
Our model’s training was comprehensive and diverse:
- **Vocabulary Expansion:**
We meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.
1. **Initial Tokenizer Training:** We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.
2. **Extraction of New Korean Tokens:** From the intermediate tokenizer, we identified all Korean tokens not present in the original base model's tokenizer.
3. **Manual Tokenizer Construction:** We then built the target tokenizer, focusing on these new Korean tokens.
4. **Frequency Analysis:** Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.
5. **Refinement of Token List:** We removed tokens appearing fewer than 6,000 times, while making sure we kept enough tokens to train the model later (see the sketch after this list).
6. **Inclusion of Single-Letter Characters:** We identified missing Korean single-letter characters that appeared more than 6,000 times and added them to the target tokenizer.
7. **Iterative Refinement:** We repeated steps 2 to 6 until there were no tokens to drop or add.
8. **Training Bias Towards New Tokens:** Our training data was biased to include more texts with new tokens, for effective learning.
This rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model.
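As a rough illustration of steps 4 and 5 above, token frequency analysis with a Hugging Face tokenizer can be sketched as follows. The tiny `korean_corpus` list is a stand-in for the 100GB Korean corpus, and the filtering logic is only an assumption about how such a count might be done, not the authors' actual pipeline:
```python
# Illustrative sketch only: count token frequencies and drop rare tokens,
# mirroring steps 4-5 above.
from collections import Counter

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-2.8B-v1.0")

korean_corpus = [
    "안녕하세요, 오늘 날씨가 좋네요.",  # in practice, stream documents from the full corpus
    "대규모 언어 모델을 위한 한국어 어휘 확장",
]

token_counts = Counter()
for text in korean_corpus:
    token_counts.update(tokenizer.tokenize(text))

# Keep only tokens seen at least 6,000 times (step 5).
frequent_tokens = {tok for tok, n in token_counts.items() if n >= 6000}
print(len(frequent_tokens))
```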
## Citation
```
@misc{kim2024efficient,
title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
year={2024},
eprint={2402.14714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Jayant9928/orpo_med_v0 | Jayant9928 | "2024-05-21T12:44:00Z" | 1,267 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-30T11:30:12Z" | ---
license: apache-2.0
---
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use
[More Information Needed]

### Downstream Use [optional]
[More Information Needed]

### Out-of-Scope Use
[More Information Needed]

## Bias, Risks, and Limitations
[More Information Needed]

### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model
Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data
[More Information Needed]

### Training Procedure

#### Preprocessing [optional]
[More Information Needed]

#### Training Hyperparameters
- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]
[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data
[More Information Needed]

#### Factors
[More Information Needed]

#### Metrics
[More Information Needed]

### Results
[More Information Needed]

#### Summary

## Model Examination [optional]
[More Information Needed] |
TheBloke/Llama-2-Coder-7B-GGUF | TheBloke | "2023-09-27T12:49:06Z" | 1,266 | 13 | transformers | [
"transformers",
"gguf",
"llama",
"generated_from_trainer",
"code",
"coding",
"text-generation",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"base_model:mrm8488/llama-2-coder-7b",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-13T16:28:12Z" | ---
language:
- code
license: apache-2.0
tags:
- generated_from_trainer
- code
- coding
- llama
datasets:
- HuggingFaceH4/CodeAlpaca_20K
base_model: mrm8488/llama-2-coder-7b
inference: false
model_creator: mrm8488
model_type: llama
pipeline_tag: text-generation
prompt_template: 'You are a coding assistant that will help the user to resolve the
  following instruction:

  ### Instruction: {prompt}

  ### Solution:

  '
quantized_by: TheBloke
thumbnail: https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png
model-index:
- name: FalCoder
  results: []
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 Coder 7B - GGUF
- Model creator: [mrm8488](https://huggingface.co/mrm8488)
- Original model: [Llama 2 Coder 7B](https://huggingface.co/mrm8488/llama-2-coder-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [mrm8488's Llama 2 Coder 7B](https://huggingface.co/mrm8488/llama-2-coder-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-Coder-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF)
* [mrm8488's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mrm8488/llama-2-coder-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: CodingAssistant
```
You are a coding assistant that will help the user to resolve the following instruction:
### Instruction: {prompt}
### Solution:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [mrm8488's Llama 2 Coder 7B](https://huggingface.co/mrm8488/llama-2-coder-7b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
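As a rough sanity check on those figures, the 4.5 bpw number for GGML_TYPE_Q4_K can be reproduced with a small back-of-the-envelope calculation. The assumption of one fp16 scale and one fp16 min per 256-weight super-block reflects the llama.cpp k-quant layout as I understand it, not something stated in this README:

```python
# Back-of-the-envelope check of the Q4_K figure above (assumed layout, not from this README).
weights_per_superblock = 8 * 32          # 8 sub-blocks of 32 weights = 256 weights
quant_bits = weights_per_superblock * 4  # 4 bits per quantized weight
subblock_bits = 8 * (6 + 6)              # 6-bit scale + 6-bit min for each sub-block
superblock_bits = 16 + 16                # assumed fp16 scale and fp16 min per super-block

bpw = (quant_bits + subblock_bits + superblock_bits) / weights_per_superblock
print(bpw)  # 4.5 bits per weight
```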
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-coder-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-coder-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-coder-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-coder-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-coder-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-coder-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-coder-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-coder-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-coder-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-coder-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-coder-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-coder-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF/blob/main/llama-2-coder-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-Coder-7B-GGUF and below it, a specific filename to download, such as: llama-2-coder-7b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-Coder-7B-GGUF llama-2-coder-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-Coder-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-Coder-7B-GGUF llama-2-coder-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-coder-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a coding assistant that will help the user to resolve the following instruction:\n### Instruction: {prompt}\n\n### Solution:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-Coder-7B-GGUF", model_file="llama-2-coder-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
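For llama-cpp-python, a roughly equivalent sketch looks like this. It assumes the GGUF file has already been downloaded to the current directory; the prompt and generation settings are illustrative, not prescribed by this repo:

```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./llama-2-coder-7b.q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=32,   # number of layers to offload to the GPU
)

prompt = (
    "You are a coding assistant that will help the user to resolve the following instruction:\n"
    "### Instruction: Write a Python function that reverses a string.\n\n"
    "### Solution:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```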
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: mrm8488's Llama 2 Coder 7B
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png" alt="llama-2 coder logo">
</div>
# LlaMa 2 Coder 🦙👩💻
**LlaMa-2 7b** fine-tuned on the **CodeAlpaca 20k instructions dataset** by using the method **QLoRA** with [PEFT](https://github.com/huggingface/peft) library.
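The exact LoRA configuration is not given in this card, so the following is only a minimal sketch of what a QLoRA setup with PEFT typically looks like; the rank, target modules, quantization settings, and base model id below are assumptions for illustration, not the values used to train this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (the "Q" in QLoRA); settings here are illustrative.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters; rank and target modules are assumed, not taken from this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```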
## Model description 🧠
[Llama-2](https://huggingface.co/meta-llama/Llama-2-7b)
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
## Training and evaluation data 📚
[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following data used for fine-tuning the Code Alpaca model.
### Training hyperparameters ⚙
```py
optim="paged_adamw_32bit",
num_train_epochs = 2,
eval_steps=50,
save_steps=50,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=2,
seed=66,
load_best_model_at_end=True,
logging_steps=1,
learning_rate=2e-4,
fp16=True,
bf16=False,
max_grad_norm=0.3,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant"
```
### Training results 🗒️
| Step | Training Loss | Validation Loss |
|------|----------|----------|
| 50 | 0.624400 | 0.600070 |
| 100 | 0.634100 | 0.592757 |
| 150 | 0.545800 | 0.586652 |
| 200 | 0.572500 | 0.577525 |
| 250 | 0.528000 | 0.590118 |
### Eval results 📊
WIP
### Example of usage 👩💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "mrm8488/llama-2-coder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

def create_prompt(instruction):
    system = "You are a coding assistant that will help the user to resolve the following instruction:"
    instruction = "### Instruction: " + instruction
    return system + "\n" + instruction + "\n\n" + "### Solution:" + "\n"

def generate(
    instruction,
    max_new_tokens=128,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs,
):
    prompt = create_prompt(instruction)
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")

    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Solution:")[1].lstrip("\n")

instruction = """
Edit the following XML code to add a navigation bar to the top of a web page
<html>
<head>
<title>CliBrAIn</title>
</head>
"""
print(generate(instruction))
```
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { llama-2-coder-7b (Revision d30d193) },
year = 2023,
url = { https://huggingface.co/mrm8488/llama-2-coder-7b },
doi = { 10.57967/hf/0931 },
publisher = { Hugging Face }
}
```
<!-- original-model-card end -->
|
Locutusque/Hyperion-2.1-Mistral-7B | Locutusque | "2024-03-12T20:38:14Z" | 1,266 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Locutusque/hyperion-v2.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-10T17:20:02Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- Locutusque/hyperion-v2.0
language:
- en
---
# Description
Further fine-tuned Locutusque/Hyperion-2.0-Mistral-7B at a higher learning rate. This was done to see if performance increased. Read Locutusque/Hyperion-2.0-Mistral-7B's model card for more information. A slight performance gain was observed. More checkpoints will be released in the future.
# Disclaimer
This model is very compliant. It will respond to any request without refusal. If you intend to deploy this model at an enterprise level, I would recommend aligning this model using DPO.
## Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF
AWQ: https://huggingface.co/solidrust/Hyperion-2.1-Mistral-7B-AWQ
|
Nekochu/Confluence-Renegade-7B | Nekochu | "2024-03-30T14:24:58Z" | 1,266 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:localfultonextractor/Erosumika-7B",
"base_model:Nitral-AI/Infinitely-Laydiculous-7B",
"base_model:Kunocchini-7b-128k-test",
"base_model:Endevor/EndlessRP-v3-7B",
"base_model:ChaoticNeutrals/BuRP_7B",
"base_model:daybreak-kunoichi-2dpo-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T10:05:26Z" | ---
license: apache-2.0
base_model:
- localfultonextractor/Erosumika-7B
- Nitral-AI/Infinitely-Laydiculous-7B
- Kunocchini-7b-128k-test
- Endevor/EndlessRP-v3-7B
- ChaoticNeutrals/BuRP_7B
- daybreak-kunoichi-2dpo-7b
library_name: transformers
tags:
- mergekit
- merge
---
My first merge of 7B RP models using mergekit. They are just trending r/ RP models, and half is BuRP_7B. I haven't used any of them myself; a **dumb** merge, but hopefully a lucky one! ^^'
## Update 03/2024:
- Original model Card Confluence-Renegade-7B <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Renegade-7B-v2-8.0bpw-h8-exl2">[8.0bpw-exl]</a>
- Added Model and merge recipe branch: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/Confluence-Renegade-7B-v2/mergekit_config.yml">Confluence-Renegade-7B-v2</a>
- Added Model and merge recipe branch: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/RoleBeagle-Moistral-11B-v2/mergekit_config.yml">RoleBeagle-Moistral-11B-v2</a> <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/RoleBeagle-Moistral-7B-v2/mergekit_config.yml">[7B truncated]</a> and Quants <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/RoleBeagle-Moistral-11B-v2-2.4bpw-h6-exl2/mergekit_config.yml">RoleBeagle-Moistral-11B-v2-2.4bpw-h6-exl2</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/RoleBeagle-Moistral-11B-v2-4.25bpw-h6-exl2">4.25bpw-h6</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/RoleBeagle-Moistral-11B-v2-8.0bpw-h8-exl2">8.0bpw-h8</a>
- Added Branch: <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/blob/Confluence-Shortcake-20B/mergekit_config.yml"> Confluence-Shortcake-20B Model recipes</a> and Quants here <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Shortcake-20B-2.4bpw-h6-exl2">Confluence-Shortcake-20B-2.4bpw-h6-exl2</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Shortcake-20B-4.25bpw-h6-exl2">4.25bpw-h6</a>, <a href="https://huggingface.co/Nekochu/Confluence-Renegade-7B/tree/Confluence-Shortcake-20B-8.0bpw-h8-exl2">8.0bpw-h8</a>
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/d38LuOG.png" alt="Nekochu" style="width: 250%; min-width: 400px; display: block; margin: auto;">
</div>
The name *Confluence* stands for the many unique RP models merged here, while *Renegade* mostly comes from their lack of guardrails.
## Download branch instructions
```shell
git clone --single-branch --branch Confluence-Shortcake-20B-2.4bpw-h6-exl2 https://huggingface.co/Nekochu/Confluence-Renegade-7B
```
### Configuration Confluence-Renegade-7B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./modela/Erosumika-7B
    parameters:
      density: [1, 0.8, 0.6]
      weight: 0.2
  - model: ./modela/Infinitely-Laydiculous-7B
    parameters:
      density: [0.9, 0.7, 0.5]
      weight: 0.2
  - model: ./modela/Kunocchini-7b-128k-test
    parameters:
      density: [0.8, 0.6, 0.4]
      weight: 0.2
  - model: ./modela/EndlessRP-v3-7B
    parameters:
      density: [0.7, 0.5, 0.3]
      weight: 0.2
  - model: ./modela/daybreak-kunoichi-2dpo-7b
    parameters:
      density: [0.5, 0.3, 0.1]
      weight: 0.2
merge_method: dare_linear
base_model: ./modela/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: intermediate-model
---
slices:
  - sources:
      - model: intermediate-model
        layer_range: [0, 32]
      - model: ./modela/BuRP_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: intermediate-model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
name: gradient-slerp
```
```mergekit-mega config.yml ./output-model-directory --cuda --allow-crimes --lazy-unpickle```
### Models Merged Confluence-Renegade-7B
The following models were included in the merge:
- [localfultonextractor/Erosumika-7B](https://huggingface.co/localfultonextractor/Erosumika-7B)
- [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
- [Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
- [Endevor/EndlessRP-v3-7B](https://huggingface.co/Endevor/EndlessRP-v3-7B)
- [ChaoticNeutrals/BuRP_7B](https://huggingface.co/ChaoticNeutrals/BuRP_7B)
- [daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b) |
NotAiLOL/Med-Yi-1.5-9B | NotAiLOL | "2024-05-14T10:17:33Z" | 1,266 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:Kabatubare/medical-alpaca",
"base_model:01-ai/Yi-1.5-9B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-14T09:04:37Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: 01-ai/Yi-1.5-9B
datasets:
- Kabatubare/medical-alpaca
---
# Uploaded model
- **Developed by:** NotAiLOL
- **License:** apache-2.0
- **Finetuned from model :** 01-ai/Yi-1.5-9B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Model detail
## Prompt format
The model uses the alpaca format:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
``` |
timm/maxvit_rmlp_small_rw_224.sw_in1k | timm | "2023-05-11T00:19:51Z" | 1,265 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.01697",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-01-20T21:34:10Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for maxvit_rmlp_small_rw_224.sw_in1k
A timm-specific MaxViT image classification model with an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Model names containing the string `rw` are `timm`-specific configs with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models containing the string `tf` exactly match TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
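For orientation, here is a small sketch (not part of the original card) that uses `timm.list_models` to enumerate the pretrained checkpoints in this family; the wildcard patterns simply follow the naming conventions described above.
```python
import timm

# Enumerate pretrained checkpoints for the MaxxViT family described above.
# 'rw' names are timm-specific configs, 'tf' names are ports of the official
# TensorFlow weights.
for pattern in ("maxvit*", "maxxvit*", "coatnet*", "coatnext*"):
    names = timm.list_models(pattern, pretrained=True)
    print(f"{pattern}: {len(names)} pretrained configs")
    for name in names[:3]:  # show a few examples per pattern
        print("  ", name)
```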
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 64.9
- GMACs: 10.7
- Activations (M): 49.3
- Image size: 224 x 224
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_rmlp_small_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_rmlp_small_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 64, 112, 112])
    # torch.Size([1, 96, 56, 56])
    # torch.Size([1, 192, 28, 28])
    # torch.Size([1, 384, 14, 14])
    # torch.Size([1, 768, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_rmlp_small_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
mradermacher/Llama3-Aloe-8B-Alpha-GGUF | mradermacher | "2024-05-14T19:17:19Z" | 1,265 | 1 | transformers | [
"transformers",
"gguf",
"biology",
"medical",
"en",
"dataset:argilla/dpo-mix-7k",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/airoboros-3.2",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:LDJnr/Capybara",
"dataset:HPAI-BSC/CareQA",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:lukaemon/mmlu",
"dataset:bigbio/pubmed_qa",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"dataset:HPAI-BSC/better-safe-than-sorry",
"dataset:HPAI-BSC/pubmedqa-cot",
"dataset:HPAI-BSC/medmcqa-cot",
"dataset:HPAI-BSC/medqa-cot",
"base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-26T20:49:47Z" | ---
base_model: HPAI-BSC/Llama3-Aloe-8B-Alpha
datasets:
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- jondurbin/airoboros-3.2
- hkust-nlp/deita-10k-v0
- LDJnr/Capybara
- HPAI-BSC/CareQA
- GBaker/MedQA-USMLE-4-options
- lukaemon/mmlu
- bigbio/pubmed_qa
- openlifescienceai/medmcqa
- bigbio/med_qa
- HPAI-BSC/better-safe-than-sorry
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medmcqa-cot
- HPAI-BSC/medqa-cot
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- biology
- medical
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
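As a minimal sketch (not part of the original card), assuming `llama-cpp-python` is installed and one of the quants from the table below has been downloaded locally, loading and querying a GGUF file can look roughly like this; the prompt and sampling settings are illustrative only:

```python
from llama_cpp import Llama

# Path to a downloaded quant from the table below.
llm = Llama(
    model_path="./Llama3-Aloe-8B-Alpha.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; set to 0 for CPU-only
)

out = llm(
    "Question: What lifestyle changes are typically recommended for mild hypertension?\nAnswer:",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```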
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF | NeverSleep | "2024-05-07T11:07:54Z" | 1,265 | 31 | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-30T00:18:41Z" | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
May 1st 2024: the GGUF files have been fixed with [this PR to llama.cpp](https://github.com/ggerganov/llama.cpp/pull/6920)
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets. We tried to strike a balance between ERP and RP: not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. The mix works out to roughly a 40%/60% ratio of non-RP to RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you try this model, please give us some feedback, either in the Community tab on HF or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains GGUF files of Lumimaid-8B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-GGUF) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt-GGUF) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist yet when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
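As a small illustrative sketch (not from the original card), the template above can be assembled into a plain prompt string before being passed to whichever GGUF loader you use:
```python
def build_llama3_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble the Llama3 chat template shown above into a single prompt string."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful roleplay assistant.", "Hello, who are you?"))
```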
## Others
Undi: If you want to support us, you can do so [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
12thD/ko-Llama-3-8B-sft-v0.3 | 12thD | "2024-05-02T01:24:46Z" | 1,265 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"korean",
"gemma",
"conversational",
"ko",
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-02T01:14:40Z" | ---
library_name: transformers
license: other
license_name: meta-llama-3-community-license-agreement
language:
- ko
- en
tags:
- korean
- gemma
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sentence-transformers/nli-roberta-large | sentence-transformers | "2024-03-27T12:07:35Z" | 1,264 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-roberta-large
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-roberta-large')
embeddings = model.encode(sentences)
print(embeddings)
```
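As a small follow-up sketch (not part of the original card), the resulting embeddings can be compared with cosine similarity using the bundled `util` helpers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/nli-roberta-large')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```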
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-roberta-large')
model = AutoModel.from_pretrained('sentence-transformers/nli-roberta-large')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-roberta-large)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
SSI/gpt-2sentence-bot | SSI | "2022-05-07T01:56:17Z" | 1,264 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-05-07T01:05:16Z" | Entry not found |
TheBloke/Chinese-Llama-2-7B-GGUF | TheBloke | "2023-09-27T12:49:17Z" | 1,264 | 22 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:ziqingyang/chinese-llama-2-7b",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | "2023-09-14T17:33:45Z" | ---
license: apache-2.0
model_name: Chinese Llama 2 7B
base_model: ziqingyang/chinese-llama-2-7b
inference: false
model_creator: Ziqing Yang
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chinese Llama 2 7B - GGUF
- Model creator: [Ziqing Yang](https://huggingface.co/ziqingyang)
- Original model: [Chinese Llama 2 7B](https://huggingface.co/ziqingyang/chinese-llama-2-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Ziqing Yang's Chinese Llama 2 7B](https://huggingface.co/ziqingyang/chinese-llama-2-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF)
* [Ziqing Yang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ziqingyang/chinese-llama-2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Ziqing Yang's Chinese Llama 2 7B](https://huggingface.co/ziqingyang/chinese-llama-2-7b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
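As a rough back-of-the-envelope sketch (not part of the original card), the bits-per-weight figures above translate into approximate file sizes of parameter count × bpw / 8; the K-quants keep some tensors at higher precision, so the real files in the table below come out somewhat larger:

```python
# Rough size estimate from bits per weight. This ignores GGUF metadata and the
# tensors that K-quants keep at higher precision, so actual files are larger.
params = 7e9  # roughly 7B parameters for this model

for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    size_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.2f} GB before overheads")
```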
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chinese-llama-2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q2_K.gguf) | Q2_K | 2 | 2.94 GB| 5.44 GB | smallest, significant quality loss - not recommended for most purposes |
| [chinese-llama-2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.07 GB| 5.57 GB | very small, high quality loss |
| [chinese-llama-2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.42 GB| 5.92 GB | very small, high quality loss |
| [chinese-llama-2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.72 GB| 6.22 GB | small, substantial quality loss |
| [chinese-llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.96 GB| 6.46 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chinese-llama-2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.99 GB| 6.49 GB | small, greater quality loss |
| [chinese-llama-2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.21 GB| 6.71 GB | medium, balanced quality - recommended |
| [chinese-llama-2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.80 GB| 7.30 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chinese-llama-2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.80 GB| 7.30 GB | large, low quality loss - recommended |
| [chinese-llama-2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.93 GB| 7.43 GB | large, very low quality loss - recommended |
| [chinese-llama-2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q6_K.gguf) | Q6_K | 6 | 5.69 GB| 8.19 GB | very large, extremely low quality loss |
| [chinese-llama-2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Chinese-Llama-2-7B-GGUF/blob/main/chinese-llama-2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.36 GB| 9.86 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Chinese-Llama-2-7B-GGUF and below it, a specific filename to download, such as: chinese-llama-2-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Chinese-Llama-2-7B-GGUF chinese-llama-2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Chinese-Llama-2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chinese-Llama-2-7B-GGUF chinese-llama-2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
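Equivalently, here is a short Python sketch (not part of the original card) that mirrors the CLI command above using `huggingface_hub` directly:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file (mirrors the huggingface-cli command above).
path = hf_hub_download(
    repo_id="TheBloke/Chinese-Llama-2-7B-GGUF",
    filename="chinese-llama-2-7b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```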
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chinese-llama-2-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chinese-Llama-2-7B-GGUF", model_file="chinese-llama-2-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
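As a brief illustrative sketch (not taken from the guides above), wiring this model into LangChain via the ctransformers integration can look roughly like this; class locations and calling conventions differ between LangChain versions:

```python
from langchain.llms import CTransformers       # newer versions: langchain_community.llms
from langchain.prompts import PromptTemplate

llm = CTransformers(
    model="TheBloke/Chinese-Llama-2-7B-GGUF",
    model_file="chinese-llama-2-7b.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)

prompt = PromptTemplate.from_template(
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
)

print(llm(prompt.format(instruction="用中文介绍一下你自己。")))
```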
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Ziqing Yang's Chinese Llama 2 7B
# Chinese-LLaMA-2-7B
**This is the full Chinese-LLaMA-2-7B model, which can be loaded directly for inference and full-parameter training.**
**Related models👇**
* Long context base models
* [Chinese-LLaMA-2-7B-16K (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-7b-16k)
* [Chinese-LLaMA-2-LoRA-7B-16K (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-7b-16k)
* [Chinese-LLaMA-2-13B-16K (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-13b-16k)
* [Chinese-LLaMA-2-LoRA-13B-16K (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-13b-16k)
* Base models
* [Chinese-LLaMA-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-7b)
* [Chinese-LLaMA-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-7b)
* [Chinese-LLaMA-2-13B (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-13b)
* [Chinese-LLaMA-2-LoRA-13B (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-13b)
* Instruction/Chat models
* [Chinese-Alpaca-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b)
* [Chinese-Alpaca-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-7b)
* [Chinese-Alpaca-2-13B (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b)
* [Chinese-Alpaca-2-LoRA-13B (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-13b)
# Description of Chinese-LLaMA-Alpaca-2
This project is based on Llama-2, released by Meta, and is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (foundation model) and Alpaca-2 (instruction-following model). These models extend and optimize the original Llama-2 with an expanded Chinese vocabulary. We used large-scale Chinese data for incremental pre-training, which further improved fundamental semantic understanding of the Chinese language, resulting in a significant performance improvement over the first-generation models. The relevant models support a 4K context and can be extended up to 18K+ using the NTK method.
The main contents of this project include:
* 🚀 New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.
* 🚀 Open-sourced the pre-training and instruction finetuning (SFT) scripts for further tuning on user's data
* 🚀 Quickly deploy and experience the quantized LLMs on CPU/GPU of personal PC
* 🚀 Support for LLaMA ecosystems like 🤗transformers, llama.cpp, text-generation-webui, LangChain, vLLM etc.
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details.
<!-- original-model-card end -->
|
Lewdiculous/llama3-8B-feifei-1.0-GGUF-IQ-Imatrix | Lewdiculous | "2024-06-06T18:15:33Z" | 1,264 | 1 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | "2024-06-06T17:30:47Z" | ---
license: apache-2.0
---
[Model request #40](https://huggingface.co/Lewdiculous/Model-Requests/discussions/40).
This model was made with a narrow use case in mind. Read the original description.

|
microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft | microsoft | "2022-12-10T10:05:51Z" | 1,263 | 0 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-06-16T06:09:46Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer v2 (large-sized model)
Swin Transformer v2 model pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.
Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
author = {Ze Liu and
Han Hu and
Yutong Lin and
Zhuliang Yao and
Zhenda Xie and
Yixuan Wei and
Jia Ning and
Yue Cao and
Zheng Zhang and
Li Dong and
Furu Wei and
Baining Guo},
title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
journal = {CoRR},
volume = {abs/2111.09883},
year = {2021},
url = {https://arxiv.org/abs/2111.09883},
eprinttype = {arXiv},
eprint = {2111.09883},
timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
facebook/mask2former-swin-base-coco-instance | facebook | "2023-09-07T15:38:41Z" | 1,263 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-11-28T11:57:43Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on COCO instance segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO instance segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_instance_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). |
FreedomIntelligence/Apollo-6B | FreedomIntelligence | "2024-04-26T11:13:01Z" | 1,263 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.03640",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-06T13:06:09Z" | ---
license: apache-2.0
---
# Multilingual Medicine: Model, Dataset, Benchmark, Code
Covering English, Chinese, French, Hindi, Spanish, and Arabic so far
<p align="center">
👨🏻💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> •📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
<br> <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English
</p>

## 🌈 Update
* **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released.
* **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> are published! 🎉
* **[2024.01.23]** Apollo repo is published!🎉
## Results
🤗<a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a>
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a>

## Usage Format
User:{query}\nAssistant:{response}<|endoftext|>
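As an illustration of this format, a minimal 🤗 transformers sketch could look like the following (the example query and generation settings are assumptions, not part of the original card):
```python
# Minimal sketch of the "User:{query}\nAssistant:{response}" format described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/Apollo-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

query = "What are common symptoms of iron deficiency?"
prompt = f"User:{query}\nAssistant:"  # the model is expected to end its reply with <|endoftext|>

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```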
## Dataset & Evaluation
- Dataset
🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a>
<details><summary>Click to expand</summary>

- [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip)
- [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train)
- Pretrain:
- data item:
- json_name: {data_source}_{language}_{data_type}.json
- data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki
- language: en(English), zh(chinese), es(spanish), fr(french), hi(Hindi)
- data_type: qa(generated qa from text)
- data_type==text: list of string
```
[
"string1",
"string2",
...
]
```
- data_type==qa: list of qa pairs(list of string)
```
[
[
"q1",
"a1",
"q2",
"a2",
...
],
...
]
```
- SFT:
- json_name: {data_source}_{language}.json
- data_type: code, general, math, medicalExam, medicalPatient
- data item: list of qa pairs(list of string)
```
[
[
"q1",
"a1",
"q2",
"a2",
...
],
...
]
```
</details>
- Evaluation
🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
<details><summary>Click to expand</summary>
- EN:
- [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
- [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
- [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
- [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
- [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
- Randomly sample 2,000 multiple-choice questions with single answer.
- [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- [CExam](https://github.com/williamliujl/CMExam): Not used in the paper
- Randomly sample 2,000 multiple-choice questions
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
  - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
</details>
## Results reproduction
<details><summary>Click to expand</summary>
**Waiting for Update**
</details>
## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{wang2024apollo,
title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
year={2024},
eprint={2403.03640},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
giux78/llama3-8B-usenet-merged | giux78 | "2024-04-29T19:24:02Z" | 1,263 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-29T19:20:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large | cambridgeltl | "2023-06-14T19:00:04Z" | 1,262 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"arxiv:2010.11784",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language: multilingual
tags:
- biomedical
- lexical-semantics
- cross-lingual
datasets:
- UMLS
---
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-XLMR
SapBERT [(Liu et al. 2021)](https://arxiv.org/pdf/2010.11784.pdf) trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AB, using [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) as the base model. Please use [CLS] as the representation of the input.
#### Extracting embeddings from SapBERT
The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel
# load this card's cross-lingual checkpoint (the original snippet pointed at the English-only PubMedBERT variant)
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large").cuda()
# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]
bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
padding="max_length",
max_length=25,
truncation=True,
return_tensors="pt")
toks_cuda = {}
for k,v in toks.items():
toks_cuda[k] = v.cuda()
cls_rep = model(**toks_cuda)[0][:,0,:] # use CLS representation as the embedding
all_embs.append(cls_rep.cpu().detach().numpy())
all_embs = np.concatenate(all_embs, axis=0)
```
For more details about training and eval, see SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
### Citation
```bibtex
@inproceedings{liu2021learning,
title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={Proceedings of ACL-IJCNLP 2021},
month = aug,
year={2021}
}
```
|
ncfrey/ChemGPT-4.7M | ncfrey | "2022-06-15T15:17:11Z" | 1,262 | 13 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-05-11T19:54:55Z" | ---
tags:
- chemistry
---
# ChemGPT 4.7M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
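For example, a minimal generation sketch with 🤗 transformers might look like this (the seed SELFIES token and sampling settings are illustrative assumptions):
```python
# Minimal sketch: sample SELFIES tokens from the GPT-Neo-based ChemGPT checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ncfrey/ChemGPT-4.7M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Start from a single carbon atom token in the SELFIES vocabulary (illustrative choice).
inputs = tokenizer("[C]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```
Generated token sequences would then typically be decoded back to SMILES with the SELFIES library.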
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
|
nielsr/donut-base | nielsr | "2022-08-11T10:16:37Z" | 1,262 | 3 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | "2022-07-26T08:58:30Z" | Entry not found |
nguyenvulebinh/wav2vec2-large-vi-vlsp2020 | nguyenvulebinh | "2023-02-21T08:56:01Z" | 1,262 | 2 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"vi",
"dataset:vlsp-asr-2020",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-11-04T21:32:45Z" | ---
language: vi
datasets:
- vlsp-asr-2020
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
---
## Model description
Our models use the wav2vec2 architecture, pre-trained on 13k hours of Vietnamese YouTube audio (unlabeled data) and fine-tuned on 250 hours of labeled VLSP ASR data with 16kHz sampled speech audio. You can find more details [here](https://github.com/nguyenvulebinh/vietnamese-wav2vec2).
## Benchmark WER result on VLSP T1 testset:
| | [base model](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vi-vlsp2020) | [large model](https://huggingface.co/nguyenvulebinh/wav2vec2-large-vi-vlsp2020) |
|---|---|---|
|without LM| 8.66 | 6.90 |
|with 5-grams LM| 6.53 | 5.32 |
## Usage
[](https://colab.research.google.com/drive/1z3FQUQ2t7nIPR-dBR4bkcee6oCDGmcd4?usp=sharing)
```python
#pytorch
#!pip install transformers==4.20.0
#!pip install https://github.com/kpu/kenlm/archive/master.zip
#!pip install pyctcdecode==0.4.0
#!pip install huggingface_hub==0.10.0
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/wav2vec2-large-vi-vlsp2020"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="t2_0000006682.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```
### Model Parameters License
The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Contact
[email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh) |
digiplay/SXZ_Luma_v0.98VAE | digiplay | "2023-08-21T16:19:19Z" | 1,262 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-26T05:05:37Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/25831?modelVersionId=68200
Sample images I made :
(generated by Hugging Face's API)

Original Author's DEMO images :

noEMA version is from Yntec :
https://huggingface.co/Yntec/Luma
Yntec has many cool merge models; highly recommended to use/try. 👍😄
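A minimal sketch of loading this checkpoint with the 🤗 diffusers `StableDiffusionPipeline` (the prompt and settings are only examples, not from the original card):
```python
# Minimal sketch, assuming the standard diffusers text-to-image API.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/SXZ_Luma_v0.98VAE",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a cinematic photo of a lighthouse at sunset", num_inference_steps=30).images[0]
image.save("luma_sample.png")
```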
|
timm/inception_next_tiny.sail_in1k | timm | "2023-08-24T19:00:49Z" | 1,262 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.16900",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-08-24T19:00:20Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for inception_next_tiny.sail_in1k
An InceptionNeXt image classification model. Trained on ImageNet-1k by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.1
- GMACs: 4.2
- Activations (M): 12.0
- Image size: 224 x 224
- **Papers:**
- InceptionNeXt: When Inception Meets ConvNeXt: https://arxiv.org/abs/2303.16900
- **Original:** https://github.com/sail-sg/inceptionnext
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('inception_next_tiny.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'inception_next_tiny.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'inception_next_tiny.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{yu2023inceptionnext,
title={InceptionNeXt: when inception meets convnext},
author={Yu, Weihao and Zhou, Pan and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2303.16900},
year={2023}
}
```
|
giux78/zefiro-7b-sft-qlora-ITA-v0.5 | giux78 | "2024-02-01T08:45:26Z" | 1,262 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"it",
"dataset:giux78/100k-sft-ready-ultrafeedback-ita",
"arxiv:2310.16944",
"arxiv:2312.09993",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-25T15:32:16Z" | ---
license: apache-2.0
datasets:
- giux78/100k-sft-ready-ultrafeedback-ita
language:
- it
pipeline_tag: text-generation
---
<img src="https://hoodie-creator.s3.eu-west-1.amazonaws.com/15be78c6-original.png" alt="llamantino53" border="0" width="400px">
# Model Card for zefiro-7b-beta-ITA-v0.5
*Last Update: 11/01/2024*<br>
<!-- Provide a quick summary of what the model is/does. -->
Zefiro is an SFT fine-tuned model for the Italian language based on [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1).
The aim of the project is to create a set of open-source models and datasets suited for the Italian language, and this is the first experiment.
The model can be used as a base model for more specific conversational tasks in Italian.
## Model Details
Zefiro is a port of the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model to the Italian language using the wonderful recipes
from [alignment-handbook](https://huggingface.co/alignment-handbook). It has also taken inspiration and insights from the [Llamantino](https://huggingface.co/swap-uniba/LLaMAntino-2-chat-7b-hf-UltraChat-ITA) model
developed by Università di Bari. For the implementation we combined different approaches from the two models mentioned above, but also from the wonderful open-source community.
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Italian
- **License:** Apache 2
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Developed by:** [giux78](https://alessandroercolani.webflow.io/)
- **Funded by:** [Business Operating System](https://www.businessos.xyz)
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of [UltraChat-ITA](https://huggingface.co/datasets/giux78/100k-sft-ready-ultrafeedback-ita), which is derived from the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset and contains a diverse range of synthetic dialogues generated by ChatGPT.
Here's how you can run the model using Transformers from 🤗 :
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "giux78/zefiro-7b-beta-ITA-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
sys_prompt = "Sei un assistente disponibile, rispettoso e onesto. " \
"Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \
"Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
"Assicurati che le tue risposte siano socialmente imparziali e positive. " \
"Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
"Se non conosci la risposta a una domanda, non condividere informazioni false."
messages = [{ 'content' : sys_prompt, 'role' : 'assistant'},
{'content' : 'Crea una lista su cosa mangiare a pranzo ogni giorno della settimana a pranzo e cena', 'role' : 'user'}]
def generate_text(sys_prompt, user_prompt):
messages = [{ 'content' : sys_prompt, 'role' : 'assistant'},
{'content' : user_prompt, 'role' : 'user'}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
generate_text(sys_prompt, 'cosa ne pensi della politica italiana?')
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zefiro-7b-beta-ITA-v0.1 has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We used [UltraChat-ITA](https://huggingface.co/datasets/giux78/100k-sft-ready-ultrafeedback-ita) as training data, which is a filtered version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset.
For translating the dataset we combined different tools and APIs; we are also evaluating the best approach for translating many more datasets.
We have seen that the translation phase is critical and can introduce incorrect syntax and semantics.
#### Summary
Zefiro-7b-beta-ITA-v0.1 is a fine-tuned version of Mistral-7B using the Zephyr approach for the Italian language.
## Citation
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{basile2023llamantino,
title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
year={2023},
eprint={2312.09993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Model Card Authors
[giux78](https://huggingface.co/giux78)
## Model Card Contact
**[email protected] |
yleo/EmertonMonarch-7B-slerp | yleo | "2024-02-14T15:24:17Z" | 1,262 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/Monarch-7B",
"yleo/EmertonBeagle-7B-dpo",
"base_model:mlabonne/Monarch-7B",
"base_model:yleo/EmertonBeagle-7B-dpo",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-14T15:14:51Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/Monarch-7B
- yleo/EmertonBeagle-7B-dpo
base_model:
- mlabonne/Monarch-7B
- yleo/EmertonBeagle-7B-dpo
license: cc-by-nc-4.0
---
# EmertonMonarch-7B-slerp
EmertonMonarch-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
* [yleo/EmertonBeagle-7B-dpo](https://huggingface.co/yleo/EmertonBeagle-7B-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/Monarch-7B
layer_range: [0, 32]
- model: yleo/EmertonBeagle-7B-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/Monarch-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "yleo/EmertonMonarch-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mlabonne/Gemmalpaca-2B | mlabonne | "2024-03-04T15:18:57Z" | 1,262 | 14 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"dataset:vicgalle/alpaca-gpt4",
"base_model:google/gemma-2b",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-22T23:17:48Z" | ---
license: other
library_name: transformers
datasets:
- vicgalle/alpaca-gpt4
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model:
- google/gemma-2b
model-index:
- name: Gemmalpaca-2B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 48.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 71.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 36.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.24
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 10.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Gemmalpaca-2B
name: Open LLM Leaderboard
---

# Gemmalpaca-2B
This is the gemma-2b model supervised fine-tuned on the [vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4) dataset. It outperforms gemma-2b-it, Google's chat version, on Nous' benchmark suite.
It's mostly a test to see how fine-tuning works with Gemma models on a well-known dataset. It turned out better than expected. :)
## 🔍 Applications
This model has a context length of 8k. I recommend using it with the Alpaca chat template and NOT the Gemma Instruct template (works perfectly with LM Studio). You also want to add `</s>` as a stop token.
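For illustration, here is a minimal sketch of the recommended setup with 🤗 transformers; the exact Alpaca prompt wording below is an assumption based on the standard Alpaca template, not something fixed by this card:
```python
# Minimal sketch, assuming the standard Alpaca instruction template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Gemmalpaca-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a large language model is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,  # the card also recommends adding `</s>` as a stop token in chat frontends
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```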
## ⚡ Quantized models
* **GGUF**: https://huggingface.co/mlabonne/Gemmalpaca-2B-GGUF
## 🏆 Evaluation
### Nous
Gemmalpaca-2B outperforms gemma-2b and gemma-2b-it on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) [📄](https://gist.github.com/mlabonne/4b638752fc3227df566f9562064cb864) | 38.39 | 24.48 | 51.22 | 47.02 | 30.85 |
| [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 |
| [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
### [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Gemmalpaca-2B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.65|
|AI2 Reasoning Challenge (25-Shot)|48.72|
|HellaSwag (10-Shot) |71.36|
|MMLU (5-Shot) |36.30|
|TruthfulQA (0-shot) |41.24|
|Winogrande (5-shot) |65.59|
|GSM8k (5-shot) |10.69|
## 🧩 Configuration
It was trained using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) with the following configuration.
```yaml
base_model: alpindale/gemma-2b
model_type: GemmaForCausalLM
tokenizer_type: GemmaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: vicgalle/alpaca-gpt4
type: alpaca
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./out
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: <s>
eos_token: </s>
unk_token: <unk>
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
opencsg/csg-wukong-1B-chat-v0.1 | opencsg | "2024-05-08T08:14:33Z" | 1,262 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-12T10:18:45Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
license: apache-2.0
---
# **csg-wukong-1B-chat-v0.1** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
**csg-wukong-1B-chat-v0.1** was finetuned on csg-wukong-1B
<br>

## Model Evaluation results
We submitted csg-wukong-1B to the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and
the results show our model ranked 8th among the ~1.5B pretrained small language models.

# Training
## Hardware
- **GPUs:** 6 V100
- **Training time:** 6 hours
## Software
- **Orchestration:** [Deepspeed](https://github.com/OpenCSGs)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
<p>
</p>
# About OpenCSG
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p>
</div>
In OpenCSG, 'Open' stands for open source and openness; 'C' stands for Converged resources, integrating and fully utilizing hybrid heterogeneous resources to cut compute costs and improve efficiency; 'S' stands for Software refined, redefining software delivery by driving software development with large models to cut labor costs and improve efficiency; 'G' stands for Generative LM, widespread, inclusive, and democratized generative large models that can be used commercially.
The vision of OpenCSG is to enable every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, open-sourcing OpenCSG's large-model software stack to the community. Everyone is welcome to use it, send feedback, and contribute.
## Model Description
**csg-wukong-1B-chat-v0.1** was fine-tuned from the csg-wukong-1B model.
<br>

## Model Evaluation Results
We submitted the csg-wukong-1B model to the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and the results show that our model currently ranks 8th among ~1.5B small language models.

# Training
## Hardware
- **Number of GPUs:** 6 V100
- **Training time:** 6 hours
## Software
- **Fine-tuning framework:** [Deepspeed](https://github.com/OpenCSGs)
- **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex) |
timberrific/open-bio-med-merge | timberrific | "2024-04-27T18:42:56Z" | 1,262 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:johnsnowlabs/JSL-MedLlama-3-8B-v1.0",
"base_model:aaditya/OpenBioLLM-Llama3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-27T18:28:07Z" | ---
base_model:
- johnsnowlabs/JSL-MedLlama-3-8B-v1.0
- aaditya/OpenBioLLM-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [johnsnowlabs/JSL-MedLlama-3-8B-v1.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v1.0)
* [aaditya/OpenBioLLM-Llama3-8B](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: aaditya/OpenBioLLM-Llama3-8B
- model: johnsnowlabs/JSL-MedLlama-3-8B-v1.0
merge_method: slerp
base_model: aaditya/OpenBioLLM-Llama3-8B
dtype: float16
parameters:
t: [0.5, 0.5, 0.5, 0.5, 0.5] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
``` |
SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned | SicariusSicariiStuff | "2024-06-08T19:30:32Z" | 1,262 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-08T02:38:22Z" | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned</b>
</div>
<img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine tuned for tasks. I did the finetune using SOTA techniques and using my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook.
# Future Plans
I plan to perform a SLERP merge with one of my other fine-tuned models, which has a bit more knowledge about Israeli topics. Additionally, I might create a larger model using MergeKit, but we'll see how it goes.
# Looking for Sponsors
Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster.
# Papers?
Maybe. We'll see. No promises here 🤓
# Contact Details
I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone.
# Versions and QUANTS
- Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha)
- Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF)
# Model architecture
Based on Mistral 7B. I didn't even bother to alter the tokenizer.
# The recommended prompt setting is Debug-deterministic:
```
temperature: 1
top_p: 1
top_k: 1
typical_p: 1
min_p: 1
repetition_penalty: 1
```
# The recommended instruction template is Mistral:
```
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- message['content'] -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}}
{%- else -%}
{{-'' + message['content'] + '</s>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-''-}}
{%- endif -%}
```
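For illustration, a minimal sketch of building a prompt with this template via 🤗 transformers (loading details and the example query are assumptions; generation is kept deterministic to match the preset above):
```python
# Minimal sketch, assuming the tokenizer ships with the Mistral-style chat template shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Translate to Hebrew: Good morning, how are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Equivalent manual format: "[INST] Translate to Hebrew: Good morning, how are you? [/INST]"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # deterministic, per the recommended preset
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```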
# English to Hebrew example:
<div align="center">
<b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b>
</div>
<img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;">
# Hebrew to English example:
<div align="center">
<b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b>
</div>
<img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;">
<div align="center">
<b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b>
[](https://www.youtube.com/watch?v=YYKeovnS0do)
</div>
<div align="center">
<b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b>
[](https://www.youtube.com/watch?v=DQFtx8M2txc)
</div>
<div align="center">
<b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b>
[](https://www.youtube.com/watch?v=w5fz3Ot6tH8)
</div>
### History
The model was originally trained about 2 months after Mistral (v0.1) was released.
As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score)
<img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;">
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
|
beomi/KoAlpaca-llama-1-7b | beomi | "2023-03-21T09:29:45Z" | 1,261 | 27 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"alpaca",
"KoAlpaca",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-03-17T15:20:05Z" | ---
license: apache-2.0
language:
- ko
- en
pipeline_tag: text-generation
tags:
- alpaca
- llama
- KoAlpaca
---
# KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)
- More informations at https://github.com/Beomi/KoAlpaca
- This repository contains finetuned KoAlpaca model weights based on LLAMA |
Jayant9928/orpo_v2 | Jayant9928 | "2024-05-21T12:43:36Z" | 1,261 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-30T12:55:12Z" | ---
license: apache-2.0
---
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Downstream Use [optional]
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
[More Information Needed]
### Training Procedure
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed]
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
[More Information Needed]
#### Factors
[More Information Needed]
#### Metrics
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
[More Information Needed]
## Environmental Impact |
VlSav/saiga_llama3_8b_sft_m11_d7-Q6_K-GGUF | VlSav | "2024-06-29T09:13:10Z" | 1,261 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:IlyaGusev/saiga_llama3_8b_sft_m11_d7",
"region:us"
] | null | "2024-06-29T09:12:42Z" | ---
base_model: IlyaGusev/saiga_llama3_8b_sft_m11_d7
tags:
- llama-cpp
- gguf-my-repo
---
# VlSav/saiga_llama3_8b_sft_m11_d7-Q6_K-GGUF
This model was converted to GGUF format from [`IlyaGusev/saiga_llama3_8b_sft_m11_d7`](https://huggingface.co/IlyaGusev/saiga_llama3_8b_sft_m11_d7) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/saiga_llama3_8b_sft_m11_d7) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo VlSav/saiga_llama3_8b_sft_m11_d7-Q6_K-GGUF --hf-file saiga_llama3_8b_sft_m11_d7-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo VlSav/saiga_llama3_8b_sft_m11_d7-Q6_K-GGUF --hf-file saiga_llama3_8b_sft_m11_d7-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo VlSav/saiga_llama3_8b_sft_m11_d7-Q6_K-GGUF --hf-file saiga_llama3_8b_sft_m11_d7-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo VlSav/saiga_llama3_8b_sft_m11_d7-Q6_K-GGUF --hf-file saiga_llama3_8b_sft_m11_d7-q6_k.gguf -c 2048
```
|
qnixsynapse/llama-2-chat-13b-gguf | qnixsynapse | "2023-08-26T14:36:41Z" | 1,260 | 0 | null | [
"gguf",
"en",
"license:llama2",
"region:us"
] | null | "2023-08-26T14:02:18Z" | ---
license: llama2
language:
- en
---
GGUF File format for llama-2-chat-13b models from Meta AI.
# Quantization:
Currently only 2 quants are available in my repository:
|filename| quantization| size|
|--------|-------------|-----|
|ggml-llama-2-13b-chat-q4_k_m.gguf| Q4_K_M| 7.8GB|
|ggml-llama-2-13b-chat-f16.gguf|f16|26GB|
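As a rough sketch, the Q4_K_M file can be run with `llama-cpp-python` once downloaded locally (the path, context size, GPU offload, and prompt below are assumptions, not part of the original instructions):
```python
from llama_cpp import Llama
# Path assumes the Q4_K_M file from the table above has been downloaded to the working directory.
llm = Llama(model_path="./ggml-llama-2-13b-chat-q4_k_m.gguf", n_ctx=4096, n_gpu_layers=35)
out = llm("[INST] What are the three primary colors? [/INST]", max_tokens=128, stop=["</s>"])
print(out["choices"][0]["text"])
```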
License subject to Meta's original license agreement. |
athirdpath/Harmonia-20B | athirdpath | "2023-11-26T12:14:46Z" | 1,260 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-25T21:21:49Z" | ---
license: cc-by-nc-4.0
---
## Description
This is a unified 20B model crafted via a multi-step SLERP merge of eight 20B models. The aim was to develop a versatile "base model" for TaskArithmetic in this size class.
## Merging Process:

## Models:
- model: Undi95/Emerhyst-20B
- model: Undi95/MXLewd-L2-20B
- model: Undi95/Lewd-Sydney-20B
- model: athirdpath/Nethena-20b-Glued
- model: tavtav/Rose-20B
- model: Undi95/PsyMedRP-v1-20B
- model: NeverSleep/Noromaid-20b-v0.1.1
- model: Undi95/U-Amethyst-20B
## Concept:
The idea behind this process was to blend the unique attributes of each model while minimizing individual quirks. This approach has also shown promising results as a standalone RP model, providing a combination of high-quality writing and situational problem-solving/awareness.
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Thanks to Undi95 for pioneering the 20B recipe, and for most of the models involved. |
urchade/gliner_large_bio-v0.1 | urchade | "2024-04-09T21:22:25Z" | 1,260 | 6 | gliner | [
"gliner",
"pytorch",
"biology",
"medical",
"token-classification",
"en",
"dataset:urchade/pubmed-ner-mistral-v0.1",
"license:apache-2.0",
"region:us"
] | token-classification | "2024-03-26T10:01:34Z" | ---
license: apache-2.0
pipeline_tag: token-classification
tags:
- biology
- medical
language:
- en
library_name: gliner
datasets:
- urchade/pubmed-ner-mistral-v0.1
---
This version is trained on "urchade/pubmed-ner-mistral-v0.1", which permits commercial use.
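For reference, a minimal `gliner` usage sketch (the example text and entity labels are illustrative; 0.3 is the threshold recommended below):
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("urchade/gliner_large_bio-v0.1")
text = "The patient was started on metformin after being diagnosed with type 2 diabetes."
labels = ["drug", "disease"]
# Predict entities with the recommended 0.3 confidence threshold.
entities = model.predict_entities(text, labels, threshold=0.3)
for entity in entities:
    print(entity["text"], "=>", entity["label"])
```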
This version works across many domains, but performs better than other GLiNER models in the biomedical domain. Use threshold=0.3 for better performance. |
kishorea/Llama3_medqa | kishorea | "2024-04-29T09:37:02Z" | 1,260 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-29T09:10:40Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aus-396/merged-13b | aus-396 | "2024-06-23T04:30:14Z" | 1,260 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | "2024-06-23T04:20:56Z" | ---
license: mit
---
|
seyonec/SMILES_tokenized_PubChem_shard00_160k | seyonec | "2021-05-20T21:08:23Z" | 1,259 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | Entry not found |
skumar9/Llama-medx_v0 | skumar9 | "2024-04-30T03:59:59Z" | 1,259 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-29T18:48:49Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
skumar9/Llama-medx_v2 | skumar9 | "2024-04-30T03:56:14Z" | 1,259 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"conversational",
"dataset:skumar9/orpo-mmlu",
"arxiv:2305.14314",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-29T20:49:25Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- skumar9/orpo-mmlu
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a llama3 8b family chat model finetuned from the base [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) with the [open assist dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2) using SFT [QLoRA](https://arxiv.org/abs/2305.14314).<br>
All the linear parameters were made trainable with a rank of 16.<br>
# Prompt template: Llama
```
'<s> [INST] <<SYS>>
You are a helpful, respectful and medical honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {question} [/INST] {Model answer } </s>'
```
# Usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name='jiviadmin/meditron-7b-guanaco-chat'
# Load the model
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map={"": 0},
)
# Load tokenizer to save it
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True,add_eos_token=True)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token_id = 18610
tokenizer.padding_side = "right"
default_system_prompt="You are a helpful, respectful and honest medical assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.Please consider the context below if applicable:
Context:NA"
#Initialize the hugging face pipeline
def format_prompt(question):
return f'''<s> [INST] <<SYS>> {default_system_prompt} <</SYS>> [INST] {question} [/INST]'''
question=' My father has a big white colour patch inside of his right cheek. please suggest a reason.'
pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=512,repetition_penalty=1.1,return_full_text=False)
result = pipe(format_prompt(question))
answer=result[0]['generated_text']
print(answer)
```
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
unicamp-dl/translation-pt-en-t5 | unicamp-dl | "2021-10-11T03:47:04Z" | 1,258 | 22 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"en",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | translation | "2022-03-02T23:29:05Z" | ---
language:
- en
- pt
datasets:
- EMEA
- ParaCrawl 99k
- CAPES
- Scielo
- JRC-Acquis
- Biomedical Domain Corpora
tags:
- translation
metrics:
- bleu
---
# Introduction
This repository brings an implementation of T5 for PT-EN translation tasks using a modest hardware setup. We propose some changes to the tokenizer and post-processing that improve the results, and we use a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)!
# Usage
Just follow "Use in Transformers" instructions. It is necessary to add a few words before to define the task to T5.
You can also create a pipeline for it. An example with the phrase " Eu gosto de comer arroz" is:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-pt-en-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-pt-en-t5")
pten_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer)
pten_pipeline("translate Portuguese to English: Eu gosto de comer arroz.")
```
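The same call without the pipeline helper looks like this (a sketch reusing the model and tokenizer loaded above; the generation settings are illustrative):
```python
# Prepend the task prefix so T5 knows which direction to translate.
inputs = tokenizer("translate Portuguese to English: Eu gosto de comer arroz.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```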
# Citation
```bibtex
@inproceedings{lopes-etal-2020-lite,
title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation",
author = "Lopes, Alexandre and
Nogueira, Rodrigo and
Lotufo, Roberto and
Pedrini, Helio",
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.wmt-1.90",
pages = "833--840",
}
``` |
Yntec/m0nst3rfy3 | Yntec | "2024-06-15T16:29:37Z" | 1,258 | 4 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Monsters",
"Comic",
"LDWorksDervlex",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-29T21:20:49Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Monsters
- Comic
- LDWorksDervlex
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# M0nst3rfy
Use m0nst3rfy3 in your prompt for a stronger effect.
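For reference, a minimal diffusers sketch (the prompt and the fp16/CUDA settings are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the checkpoint from this repo and generate one image with the trigger word.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/m0nst3rfy3", torch_dtype=torch.float16).to("cuda")
image = pipe("m0nst3rfy3, cute monster girl, masterpiece, detailed", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("m0nst3rfy3_sample.png")
```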
DucHaitenLofi R with the M0nst3rfy LORA by LDWorksDervlex trained in and the BlessedV2 VAE baked in.
Comparison:

((best quality)), ((masterpiece)), (detailed), absurdres, HDR, cowboy shot, full body, EEVEE, VAPOREON, slightly underwater, sitting on rock, sunlight, big smile, tail up, detailed skin texture, subsurface scattering, splashing, blush, goosebumps
(Click for larger)
Samples and a prompt by digiplay:

(Click for larger)
Top left: Cut rabbit.
Top right (unknown prompt):
Bottom left (unknown prompt):
Bottom right (unknown prompt):
Original pages:
https://civitai.com/models/68037?modelVersionId=72706 (M0nst3rfy LoRA)
https://civitai.com/models/65706?modelVersionId=70363 (DucHaitenLofi R)
# Recipe
- SuperMerger Merge LoRA to checkpoint:
Model A: DucHaitenLofi R
LoRA: M0nst3rfy
Output: LofiMonsters
- SuperMerger Add Difference Train Difference:
Model A: DucHaitenLofi R
Model B: LofiMonsters
Model C: DucHaitenLofi R
Output: m0nst3rfy3NoVAE
- Bake Blessed2VAE
Output: m0nst3rfy3 |
PrunaAI/dolphin-2.9-llama3-70b-GGUF-smashed | PrunaAI | "2024-04-26T16:50:01Z" | 1,258 | 3 | null | [
"gguf",
"pruna-ai",
"base_model:cognitivecomputations/dolphin-2.9-llama3-70b",
"region:us"
] | null | "2024-04-26T02:25:36Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-2.9-llama3-70b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the cognitivecomputations/dolphin-2.9-llama3-70b model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/dolphin-2.9-llama3-70b-GGUF-smashed and below it, a specific filename to download, such as: dolphin-2.9-llama3-70b.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/dolphin-2.9-llama3-70b-GGUF-smashed dolphin-2.9-llama3-70b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/dolphin-2.9-llama3-70b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/dolphin-2.9-llama3-70b-GGUF-smashed dolphin-2.9-llama3-70b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m dolphin-2.9-llama3-70b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./dolphin-2.9-llama3-70b.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./dolphin-2.9-llama3-70b.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
guoyww/animatediff-motion-lora-rolling-anticlockwise | guoyww | "2023-11-03T13:06:58Z" | 1,257 | 1 | diffusers | [
"diffusers",
"safetensors",
"animatediff",
"text-to-video",
"region:us"
] | text-to-video | "2023-11-03T13:06:58Z" | ---
library_name: diffusers
pipeline_tag: text-to-video
tags:
- animatediff
---
# Motion LoRAs
Motion LoRAs allow adding specific types of motion to your animations.

Currently the following types of motion are available for models using the `guoyww/animatediff-motion-adapter-v1-5-2` checkpoint.
- Zoom In/Out
- Pan Left/Right
- Tilt Up/Down
- Rolling Clockwise/Anticlockwise
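A rough diffusers sketch of attaching this Motion LoRA to the adapter above (the SD 1.5 base checkpoint, prompt, and sampler settings are assumptions, not prescriptions from this card):
```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif
# Load the motion adapter and a Stable Diffusion 1.5 base model (any SD 1.5 checkpoint should work).
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear", clip_sample=False)
# Attach the rolling-anticlockwise Motion LoRA.
pipe.load_lora_weights("guoyww/animatediff-motion-lora-rolling-anticlockwise", adapter_name="rolling-anticlockwise")
output = pipe(prompt="a coastline at sunset, high quality", num_frames=16, num_inference_steps=25, guidance_scale=7.5)
export_to_gif(output.frames[0], "rolling_anticlockwise.gif")
```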
Please refer to the [AnimateDiff documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/animatediff) for information on how to use these Motion LoRAs. |
IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0 | IBI-CAAI | "2024-01-10T23:57:13Z" | 1,257 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-10T23:45:07Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model Card MELT-TinyLlama-1.1B-Chat-v1.0
The MELT-TinyLlama-1.1B-Chat-v1.0 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data.
MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates a 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 medical benchmarks, including USMLE, Indian AIIMS, and NEET medical examination examples.
## Model Details
The Medical Education Language Transformer (MELT) models have been trained on a wide-range of text, chat, Q/A, and instruction data in the medical domain.
While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/)
- **Funded by:** [Institute or Biomedical Informatics](https://www.research.uky.edu/IBI)
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## Uses
MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
MELT is intended for research purposes only and should not be used for medical advice.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
MELT was trained using publicly available collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy.
## How to Get Started with the Model
Use this model like you would any llama-2-7b-chat-hf model.
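A minimal sketch with 🤗 Transformers, relying on the chat template shipped with the tokenizer (the prompt and sampling settings are illustrative; research use only, as noted above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
# Format a single-turn chat with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```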
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The following datasets were used for training:
[Expert Med](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Q3A969)
[MedQA train](https://huggingface.co/datasets/bigbio/med_qa)
[MedMCQA train](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
[LiveQA](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017)
[MedicationQA](https://huggingface.co/datasets/truehealth/medicationqa)
[MMLU clinical topics](https://huggingface.co/datasets/Stevross/mmlu)
[Medical Flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards)
[Wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc)
[Wikidoc Patient Information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
[MEDIQA](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
[MMMLU](https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu)
[icliniq 10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing)
[HealthCare Magic 100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing)
[GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing)
[Mental Health Conversational](https://huggingface.co/datasets/heliosbrahma/mental_health_conversational_dataset)
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- **Lora Rank:** 64
- **Lora Alpha:** 16
- **Lora Targets:** "o_proj","down_proj","v_proj","gate_proj","up_proj","k_proj","q_proj"
- **LR:** 2e-4
- **Epoch:** 3
- **Precision:** bf16 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
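Expressed as a `peft` configuration, the listed settings would look roughly like this (the dropout and bias values are assumptions, not stated in the card):
```python
from peft import LoraConfig
lora_config = LoraConfig(
    r=64,                       # Lora Rank
    lora_alpha=16,              # Lora Alpha
    target_modules=["o_proj", "down_proj", "v_proj", "gate_proj", "up_proj", "k_proj", "q_proj"],
    lora_dropout=0.05,          # assumption: not stated above
    bias="none",                # assumption: not stated above
    task_type="CAUSAL_LM",
)
```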
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates an average 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across 3 USMLE, Indian AIIMS, and NEET medical examination benchmarks.
### TinyLlama-1.1B-Chat-v1.0
- **medqa:** {'base': {'Average': 25.49, 'STEP-1': 24.48, 'STEP-2&3': 26.64}}
- **mausmle:** {'base': {'Average': 19.71, 'STEP-1': 21.18, 'STEP-2': 20.69, 'STEP-3': 17.76}}
- **medmcqa:** {'base': {'Average': 28.52, 'MEDICINE': 29.35, 'OPHTHALMOLOGY': 28.57, 'ANATOMY': 30.82, 'PATHOLOGY': 29.07, 'PHYSIOLOGY': 20.45, 'DENTAL': 30.09, 'RADIOLOGY': 14.29, 'BIOCHEMISTRY': 22.31, 'ANAESTHESIA': 26.09, 'GYNAECOLOGY': 24.84, 'PHARMACOLOGY': 32.02, 'SOCIAL': 31.11, 'PEDIATRICS': 31.82, 'ENT': 28.95, 'SURGERY': 31.45, 'MICROBIOLOGY': 26.03, 'FORENSIC': 16.28, 'PSYCHIATRY': 22.22, 'SKIN': 40.0, 'ORTHOPAEDICS': 21.43, 'UNKNOWN': 0.0}}
- **average:** 24.57%
### MELT-TinyLlama-1.1B-Chat-v1.0
- **medqa:** {'base': {'Average': 29.5, 'STEP-1': 28.17, 'STEP-2&3': 31.03}}
- **mausmle:** {'base': {'Average': 21.51, 'STEP-1': 27.06, 'STEP-2': 19.54, 'STEP-3': 18.69}}
- **medmcqa:** {'base': {'Average': 32.84, 'MEDICINE': 27.72, 'OPHTHALMOLOGY': 38.1, 'ANATOMY': 39.73, 'PATHOLOGY': 32.56, 'PHYSIOLOGY': 35.61, 'DENTAL': 32.23, 'RADIOLOGY': 41.07, 'BIOCHEMISTRY': 33.06, 'ANAESTHESIA': 39.13, 'GYNAECOLOGY': 22.88, 'PHARMACOLOGY': 32.58, 'SOCIAL': 26.67, 'PEDIATRICS': 34.09, 'ENT': 42.11, 'SURGERY': 33.47, 'MICROBIOLOGY': 30.14, 'FORENSIC': 41.86, 'PSYCHIATRY': 55.56, 'SKIN': 60.0, 'ORTHOPAEDICS': 35.71, 'UNKNOWN': 100.0}}
- **average:** 27.95%
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[MedQA test](https://huggingface.co/datasets/bigbio/med_qa)
[MedMCQA test](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
[MA USMLE](https://huggingface.co/datasets/medalpaca/medical_meadow_usmle_self_assessment)
## Disclaimer:
The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
|
duyntnet/Mistral-7B-OpenOrca-imatrix-GGUF | duyntnet | "2024-05-11T09:42:45Z" | 1,257 | 1 | transformers | [
"transformers",
"gguf",
"imatrix",
"Mistral-7B-OpenOrca",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-05-11T07:21:30Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Mistral-7B-OpenOrca
---
Quantizations of https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca
# From original readme
## Prompt Template
We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this.
This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML.
This formatting is also available via a pre-defined [Transformers chat template](https://huggingface.co/docs/transformers/main/chat_templating),
which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
```python
chat = [
{"role": "system", "content": "You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!"}
{"role": "user", "content": "How are you?"},
{"role": "assistant", "content": "I am doing well!"},
{"role": "user", "content": "Please tell me about how mistral winds have attracted super-orcas."},
]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
which will yield:
```
<|im_start|>system
You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
Please tell me about how mistral winds have attracted super-orcas.<|im_end|>
<|im_start|>assistant
```
If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized
and formatted conversation ready to pass to `model.generate()`.
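For example, a sketch of that pattern (loading of the model and tokenizer is omitted here):
```python
# Tokenize the chat directly and generate a reply, decoding only the newly generated tokens.
input_ids = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```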
## Inference
See [this notebook](https://colab.research.google.com/drive/1yZlLSifCGELAX5GN582kZypHCv0uJuNX?usp=sharing) for inference details.
Note that you need the development snapshot of Transformers currently, as support for Mistral hasn't been released into PyPI yet:
```
pip install git+https://github.com/huggingface/transformers
``` |
jfer1015/Codestral-22B-v0.1-Q4_K_M-GGUF | jfer1015 | "2024-06-24T01:57:05Z" | 1,257 | 0 | null | [
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Codestral-22B-v0.1",
"license:other",
"region:us"
] | null | "2024-06-24T01:56:09Z" | ---
base_model: mistralai/Codestral-22B-v0.1
language:
- code
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
- llama-cpp
- gguf-my-repo
inference: false
---
# jfer1015/Codestral-22B-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Codestral-22B-v0.1`](https://huggingface.co/mistralai/Codestral-22B-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Codestral-22B-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jfer1015/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jfer1015/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jfer1015/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jfer1015/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
|
samdd00/btxml_v1 | samdd00 | "2024-06-25T18:03:55Z" | 1,257 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | "2024-06-25T17:53:27Z" | ---
license: mit
---
|
Yale-LILY/brio-cnndm-uncased | Yale-LILY | "2022-11-15T18:19:02Z" | 1,256 | 6 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"text-2-text-generation",
"arxiv:2203.16804",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-30T23:35:54Z" | ---
tags:
- text-2-text-generation
- bart
---
# Model Card for brio-cnndm-uncased
# Model Details
## Model Description
Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary. This assumption may lead to performance degradation during inference, where the model needs to compare several system-generated (candidate) summaries that have deviated from the reference summary. To address this problem, we propose a novel training paradigm which assumes a non-deterministic distribution so that different candidate summaries are assigned probability mass according to their quality.
- **Developed by:** Yale LILY Lab
- **Shared by [Optional]:** Yale LILY Lab
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BART
- **Resources for more information:**
- [Github Repo](https://github.com/Yale-LILY/BRIO)
- [Associated Paper](https://arxiv.org/abs/2203.16804)
# Uses
## Direct Use
This model can be used for the task of Text2Text Generation
## Downstream Use [Optional]
Further analysis also shows that our model can estimate probabilities of candidate summaries that are more correlated with their level of quality.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the [associated paper](https://arxiv.org/abs/2203.16804):
> CNNDM is a large-scale news dataset. Following Nallapati et al., we treat the news articles as the source documents and the associated highlights as the summaries. XSum is a highly abstractive dataset of articles from the British Broadcasting Corporation (BBC). NYT contains articles from the New York Times and the associated summaries.
## Training Procedure
### Preprocessing
The model creators note in the [associated paper](https://arxiv.org/abs/2203.16804):
> We follow Kedzie et al. (2018) for data preprocessing and splitting, and use the associated archival abstracts as the summaries
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
### CNNDM
| | ROUGE-1 | ROUGE-2 | ROUGE-L |
|----------|---------|---------|---------|
| BART | 44.16 | 21.28 | 40.90 |
| Ours | 47.78 | 23.55 | 44.57 |
# Model Examination
The model creators note in the [associated paper](https://arxiv.org/abs/2203.16804):
> We attribute BRIO-Ctr’s superior performance to its use of the same model architecture (BART) for both candidate generation and scoring, while SimCLS uses RoBERTa as the evaluation model. As a result, BRIO-Ctr maximizes the parameter sharing between the two stages, and preserves the power of the Seq2Seq model pre-trained on the same dataset.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
The model creators note in the [associated paper](https://arxiv.org/abs/2203.16804):
> Formulate summarization as a sequence-to-sequence (Seq2Seq) problem
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2203.16804,
  doi = {10.48550/ARXIV.2203.16804},
  url = {https://arxiv.org/abs/2203.16804},
  author = {Liu, Yixin and Liu, Pengfei and Radev, Dragomir and Neubig, Graham},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {BRIO: Bringing Order to Abstractive Summarization},
  publisher = {arXiv},
  year = {2022}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Yale LILY Lab in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Yale-LILY/brio-cnndm-uncased")
model = AutoModelForSeq2SeqLM.from_pretrained("Yale-LILY/brio-cnndm-uncased")
```
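A minimal summarization sketch using the objects loaded above (the article text and generation settings are illustrative):
```python
article = "The city council approved a new public transit plan on Tuesday, expanding bus service to several suburbs."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```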
</details>
|
TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ | TheBloke | "2024-01-31T22:20:55Z" | 1,256 | 46 | trl | [
"trl",
"safetensors",
"mistral",
"distilabel",
"dpo",
"rlaif",
"rlhf",
"en",
"dataset:argilla/dpo-mix-7k",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | null | "2024-01-31T21:50:39Z" | ---
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
datasets:
- argilla/dpo-mix-7k
inference: false
language:
- en
library_name: trl
license: apache-2.0
model_creator: Argilla
model_name: CapyBaraHermes 2.5 Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- distilabel
- dpo
- rlaif
- rlhf
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CapyBaraHermes 2.5 Mistral 7B - GPTQ
- Model creator: [Argilla](https://huggingface.co/argilla)
- Original model: [CapyBaraHermes 2.5 Mistral 7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Argilla's CapyBaraHermes 2.5 Mistral 7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF)
* [Argilla's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
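To make these parameters concrete, here is a rough sketch of how the `main`-branch settings (4-bit, group size 128, Act Order, damp 0.1) could be expressed with Transformers' GPTQ integration. It is an illustration only, not the exact command used to produce these files, and the calibration dataset below is a stand-in:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "argilla/CapybaraHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(
    bits=4,             # "Bits"
    group_size=128,     # "GS"
    desc_act=True,      # "Act Order"
    damp_percent=0.1,   # "Damp %"
    dataset="c4",       # stand-in; the files in this repo were calibrated on VMware Open Instruct
    tokenizer=tokenizer,
)

# Loading with a GPTQConfig quantises the weights on the fly (requires a GPU plus optimum and auto-gptq)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config, device_map="auto")
model.save_pretrained("CapybaraHermes-2.5-Mistral-7B-GPTQ-4bit-128g")
tokenizer.save_pretrained("CapybaraHermes-2.5-Mistral-7B-GPTQ-4bit-128g")
```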
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `CapybaraHermes-2.5-Mistral-7B-GPTQ`:
```shell
mkdir CapybaraHermes-2.5-Mistral-7B-GPTQ
huggingface-cli download TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ --local-dir CapybaraHermes-2.5-Mistral-7B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir CapybaraHermes-2.5-Mistral-7B-GPTQ
huggingface-cli download TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir CapybaraHermes-2.5-Mistral-7B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir CapybaraHermes-2.5-Mistral-7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ --local-dir CapybaraHermes-2.5-Mistral-7B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, as it has to store the model files twice (every byte is stored both in the intended target folder, and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `CapybaraHermes-2.5-Mistral-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Argilla's CapyBaraHermes 2.5 Mistral 7B
# CapybaraHermes-2.5-Mistral-7B
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/Vmr0FtTvnny6Snm-UDM_n.png">
</div>
<p align="center">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
This model is the launching partner of the [capybara-dpo dataset](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-9k-binarized), built with ⚗️ distilabel. It is a preference-tuned [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).
CapybaraHermes has been preference-tuned with LoRA and TRL for 3 epochs using argilla's [dpo mix 7k](https://huggingface.co/datasets/argilla/dpo-mix-7k).
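As a rough sketch of what such a LoRA + DPO setup can look like with TRL's `DPOTrainer`: argument names vary between TRL releases, the hyperparameters below are illustrative rather than the ones actually used, and the toy dataset stands in for dpo-mix-7k (whose chat turns would first be flattened into prompt/chosen/rejected strings):

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Toy stand-in for argilla/dpo-mix-7k, already in prompt / chosen / rejected form
train_dataset = Dataset.from_list([
    {"prompt": "Name one capybara fact.",
     "chosen": "Capybaras are the largest living rodents.",
     "rejected": "Capybaras are a kind of fish."},
])

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # with a PEFT adapter, the frozen base model serves as the reference
    beta=0.1,         # illustrative value; the actual beta is not reported on the card
    args=TrainingArguments(output_dir="capybarahermes-dpo", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=5e-5),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```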
To test the impact on multi-turn performance, we used MTBench. We also include the Nous benchmark results and Mistral-7B-Instruct-v0.2 for reference, as it is a strong 7B model on MTBench:
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | MTBench First Turn | MTBench Second Turn | Nous avg. | MTBench avg. |
|-----------------------------------|---------|---------|------------|----------|------------|-------------|-----------|--------------|
| argilla/CapybaraHermes-2.5-Mistral-7B | **43.8** | **73.35** | 57.07 | **42.44** | 8.24375 | **7.5625** | 54.16 | **7.903125** |
| teknium/OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | **8.25** | 7.2875 | 52.42 | 7.76875 |
| Mistral-7B-Instruct-v0.2 | 38.5 | 71.64 | **66.82** | 42.29 | 7.8375 | 7.1 | **54.81** | 7.46875 |
The most interesting aspect in the context of the capybara-dpo dataset is the increased performance in MTBench Second Turn scores.
For the merge lovers, we also preference-tuned Beagle14-7B with a mix of capybara-dpo and distilabel orca pairs using the same recipe as NeuralBeagle (see [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) for reference):
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)| 45.29| 76.92| 71.66| 48.78| 60.66|
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Argilla
- **Shared by [optional]:** Argilla
- **Model type:** 7B chat model
- **Language(s) (NLP):** English
- **License:** Same as OpenHermes
- **Finetuned from model [optional]:** [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
|
nerijs/pixelportraits192-XL-v1.0 | nerijs | "2024-03-26T04:24:10Z" | 1,256 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-image | "2024-03-26T04:21:05Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
A character with intricate brass goggles perched on their forehead, wearing
a Victorian-era jacket adorned with gears and copper wires. They have a
mechanical arm made of polished brass and intricate clockwork, holding a
glowing, etheric device.
output:
url: images/pixel-0025-706806263.png
- text: >-
An elf with long, silver hair flowing down their back, pointed ears peeking
through. They wear a green cloak over leather armor, a quiver of arrows
slung over their shoulder, and a longbow in hand. Their eyes gleam with an
ethereal light, suggesting a deep connection to nature.
<lora:pixelportraits192-v1-2151:0.75>
output:
url: images/pixel-0055-1623441747.png
- text: a cute corgi
output:
url: images/00013-3539920073.png
- text: >-
a man with a rugged appearance, with a scar running down over one eye. They
wear a red coat over a black space suit, a laser pistol holstered at their
side, and a cybernetic eye gleaming with data streams. Their stance exudes
confidence and defiance.
output:
url: images/pixel-0065-3592199040.png
- text: >-
This merperson has shimmering scales covering their tail, which gradients
from deep blue to aquamarine. They wield a trident made of coral and pearl,
and their hair floats around their head like seaweed. Their arms are adorned
with shell armor, and their eyes reflect the ocean's depths.
output:
url: images/pixel-0073-2765702999.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: cc-by-nc-4.0
---
# PixelPortraits192 XL v1.0
<Gallery />
## Tips
- Use it at **1536x1536** or similar
- Best results at 0.75
- Works great with long and detailed prompts
- No trigger token required
- Works even better with upscaling/high-res fix
## Download model
Weights for this model are available in Safetensors format.
[Download](/nerijs/pixelportraits192-XL-v1.0/tree/main) them in the Files & versions tab.
|
Jayant9928/tnayaj | Jayant9928 | "2024-04-26T09:17:02Z" | 1,256 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-26T08:19:17Z" | ---
license: apache-2.0
---
The tnayaj-8B model is an innovative open-source language model specifically engineered for the biomedical domain. Crafted by Jayant AI Labs, this model harnesses state-of-the-art methodologies to achieve unparalleled performance across various biomedical tasks.
🏥 Specialization in medicine: tnayaj-8B caters to the intricate linguistic and informational demands of the medical and life sciences realms. Its refinement stems from extensive training on a comprehensive biomedical dataset, enabling precise and articulate text generation within the domain.
🎓 Exceptional Performance: Boasting 8 billion parameters, the model is built to deliver strong results across biomedical tasks.
🧠 Advanced Training Methodologies: tnayaj-8B builds upon the foundational prowess of Meta-Llama-3-8B-Instruct. It integrates the DPO dataset and a tailored array of medical instruction data for refinement. Central to its training regimen are meticulously curated components, including: |
Alphacode-AI/AlphaMist7B-slr-v4-slow2 | Alphacode-AI | "2024-05-13T01:42:31Z" | 1,256 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-13T01:02:05Z" | ---
license: cc-by-4.0
---
This model is a version of mistralai/Mistral-7B-v0.1 that has been fine-tuned on our in-house custom data.
Training spec: we trained the model on a single A100x4 node (4× A100 GPUs) using DeepSpeed, the Hugging Face TRL Trainer, and Hugging Face Accelerate. |
moussaKam/AraBART | moussaKam | "2022-05-05T13:17:29Z" | 1,255 | 12 | transformers | [
"transformers",
"pytorch",
"mbart",
"feature-extraction",
"summarization",
"bart",
"fill-mask",
"ar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-09T10:05:16Z" | ---
tags:
- summarization
- bart
language:
- ar
widget:
- text: بيروت هي عاصمة <mask>.
license: apache-2.0
pipeline_tag: "fill-mask"
---
AraBART is the first Arabic model in which the encoder and the decoder are pretrained end-to-end, based on BART. AraBART follows the architecture of BART-Base,
which has 6 encoder and 6 decoder layers and a hidden dimension of 768. In total, AraBART has 139M parameters.
AraBART achieves the best performance on multiple abstractive summarization datasets, outperforming strong baselines including a pretrained Arabic BERT-based models and multilingual mBART and mT5 models.
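As an illustrative example of the fill-mask usage shown in the widget above, a minimal sketch with the `transformers` pipeline:

```python
from transformers import pipeline

# Fill the masked token with AraBART; the example sentence means "Beirut is the capital of <mask>."
fill_mask = pipeline("fill-mask", model="moussaKam/AraBART")
for prediction in fill_mask("بيروت هي عاصمة <mask>."):
    print(prediction["token_str"], prediction["score"])
```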
|
PlanTL-GOB-ES/es_anonimization_core_lg | PlanTL-GOB-ES | "2023-01-11T13:34:51Z" | 1,255 | 1 | spacy | [
"spacy",
"token-classification",
"es",
"ca",
"license:mit",
"model-index",
"region:us"
] | token-classification | "2023-01-11T13:28:57Z" | ---
tags:
- spacy
- token-classification
language:
- es
- ca
license: mit
model-index:
- name: ca_anonimization_core_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.746031746
- name: NER Recall
type: recall
value: 0.6438356164
- name: NER F Score
type: f_score
value: 0.6911764706
widget:
- text: "La matrícula del coche es 8560 JXK y el nombre del propietario es Jon Permanyer Ugartemendia, DNI 362-69-58-6n. Tel: 628539864. Calle Pasteur 46 Bajos, 08024 Barcelona"
---
This is a spaCy multilingual (Catalan & Spanish) anonymization model, for use with BSC's AnonymizationPipeline:
https://github.com/TeMU-BSC/AnonymizationPipeline
Install the model as a package with `pip install https://huggingface.co/PlanTL-GOB-ES/es_anonimization_core_lg/resolve/main/es_anonimization_core_lg-any-py3-none-any.whl`.
The anonymization pipeline is a library for identifying sensitive data, and ultimately anonymizing it, in Spanish and Catalan user-generated plain text.
This is not a standalone model and is meant to work within the pipeline.
The model can detect the following entities: `EMAIL`, `FINANCIAL`, `ID`, `LOC`, `MISC`, `ORG`, `PER`, `TELEPHONE`, `VEHICLE`, `ZIP`
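Although the model is meant to be used inside the AnonymizationPipeline, a minimal sketch of loading it directly with spaCy (assuming the wheel above has been installed) looks like this:

```python
import spacy

# The pip-installed wheel registers the pipeline under its package name
nlp = spacy.load("es_anonimization_core_lg")

# "The car's number plate is 8560 JXK and the owner is Jon Permanyer. Tel: 628539864."
doc = nlp("La matrícula del coche es 8560 JXK y el propietario es Jon Permanyer. Tel: 628539864.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. VEHICLE, PER, TELEPHONE
```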
| Feature | Description |
| --- | --- |
| **Name** | `ca_anonimization_core_lg` |
| **Version** | `1.0.0` |
| **spaCy** | `>=3.2.3,<4.0.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Joaquin Silveira](https://github.com/TeMU-BSC/AnonymizationPipeline) |
### Label Scheme
<details>
<summary>View label scheme (322 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=SYM`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `POS=AUX`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=VERB`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, 
`Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Quot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `POS=VERB`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=PRON\|PronType=Int`, 
`Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `POS=INTJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=SYM`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, 
`Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `POS=PUNCT\|PunctSide=Fin\|Punta d'aignctType=Brck`, `Degree=Cmp\|POS=ADJ`, `AdvType=Tim\|POS=SYM`, `Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `EMAIL`, `FINANCIAL`, `ID`, `LOC`, `MISC`, `ORG`, `PER`, `TELEPHONE`, `VEHICLE`, `ZIP` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 69.12 |
| `ENTS_P` | 74.60 |
| `ENTS_R` | 64.38 |
| `NER_LOSS` | 26573.78 |
|
wojtab/llava-13b-v0-4bit-128g | wojtab | "2023-05-01T17:11:16Z" | 1,255 | 75 | transformers | [
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-21T18:48:45Z" | This is a 4-bit GPTQ quantization of the LLaMA part of LLaVA (https://github.com/haotian-liu/LLaVA; original delta weights: https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0).
It was quantized with:
```
CUDA_VISIBLE_DEVICES=0 python llama.py /workspace/LLaVA-13B-v0/ c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors llava-13b-v0-4bit-128g.safetensors
```
using the CUDA branch of https://github.com/oobabooga/GPTQ-for-LLaMa (commit `57a2629`).
You can now run it in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) with the `llava` extension (see: https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava); a tentative launch command is sketched below.
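The flags below are an assumption and depend on your text-generation-webui version; check the extension's README for the exact invocation:

```shell
python server.py --model llava-13b-v0-4bit-128g --extensions llava --chat --wbits 4 --groupsize 128
```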
---
license: other
---
|
TheBloke/em_german_13b_v01-GGUF | TheBloke | "2023-10-01T21:15:57Z" | 1,255 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"facebook",
"meta",
"pytorch",
"llama-2",
"german",
"deutsch",
"text-generation",
"de",
"base_model:jphme/em_german_13b_v01",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-01T21:05:11Z" | ---
base_model: jphme/em_german_13b_v01
inference: false
language:
- de
library_name: transformers
license: llama2
model_creator: Jan Philipp Harries
model_name: EM German 13B v01
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- german
- deutsch
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# EM German 13B v01 - GGUF
- Model creator: [Jan Philipp Harries](https://huggingface.co/jphme)
- Original model: [EM German 13B v01](https://huggingface.co/jphme/em_german_13b_v01)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jan Philipp Harries's EM German 13B v01](https://huggingface.co/jphme/em_german_13b_v01).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF)
* [Jan Philipp Harries's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jphme/em_german_13b_v01)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: EmGerman
```
Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
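As a worked check of the Q4_K figure above (assuming the per-super-block scale and min are each stored as 16-bit floats, which is not stated here but matches the llama.cpp layout): a super-block holds 8 × 32 = 256 weights at 4 bits each (1024 bits), plus 8 block scales and 8 block mins at 6 bits each (96 bits), plus the two 16-bit super-block values (32 bits), for 1152 bits in total, i.e. 1152 / 256 = 4.5 bits per weight.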
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [em_german_13b_v01.Q2_K.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [em_german_13b_v01.Q3_K_S.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [em_german_13b_v01.Q3_K_M.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [em_german_13b_v01.Q3_K_L.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [em_german_13b_v01.Q4_0.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [em_german_13b_v01.Q4_K_S.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [em_german_13b_v01.Q4_K_M.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [em_german_13b_v01.Q5_0.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [em_german_13b_v01.Q5_K_S.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [em_german_13b_v01.Q5_K_M.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [em_german_13b_v01.Q6_K.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [em_german_13b_v01.Q8_0.gguf](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF/blob/main/em_german_13b_v01.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/em_german_13b_v01-GGUF and below it, a specific filename to download, such as: em_german_13b_v01.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/em_german_13b_v01-GGUF em_german_13b_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/em_german_13b_v01-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/em_german_13b_v01-GGUF em_german_13b_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m em_german_13b_v01.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/em_german_13b_v01-GGUF", model_file="em_german_13b_v01.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
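If you prefer `llama-cpp-python` over ctransformers, a roughly equivalent sketch looks like this (it assumes the Q4_K_M GGUF file has already been downloaded to the current directory as shown above; adjust `n_gpu_layers` for your hardware):
```python
from llama_cpp import Llama

# Load the local GGUF file; set n_gpu_layers=0 for CPU-only inference
llm = Llama(
    model_path="./em_german_13b_v01.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

prompt = "Du bist ein hilfreicher Assistent. USER: Was ist die Hauptstadt von Deutschland? ASSISTANT:"
output = llm(prompt, max_tokens=200, temperature=0.7)
print(output["choices"][0]["text"])
```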
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jan Philipp Harries's EM German 13B v01

# EM German 13b
([Zur deutschen Version](#deutsch-em-german))
**EM German (v01)** is an experimental llama2-based model family, fine-tuned on a large dataset of various instructions in the German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content.
This is the 13b llama2 version.
# Links & Demos
We will publish further instructions, updates and code-snippets in the project's [Github-Repo](https://github.com/jphme/EM_German).
## Model Links
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc |
| [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | soon | tbc |
## Demos:
You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ):
* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* Further Examples for LLM apps will follow soon...
# Prompt Format
This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better suited one (see below for RAG-tasks).
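As a quick check, the prompt can be assembled with a small helper (illustrative only; `build_prompt` is not part of the model release):
```python
def build_prompt(instruction: str, system: str = "Du bist ein hilfreicher Assistent.") -> str:
    # Vicuna-style format without line breaks, as described above
    return f"{system} USER: {instruction} ASSISTANT:"

print(build_prompt("Wer hat die Relativitätstheorie entwickelt?"))
```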
# Results
Attached are a few example outputs. I also developed a few small German benchmarks (based on the Eleuther LM Evaluation Harness), which I will publish soon.
## Example Output
Prompt:
```
Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT:
```
Response EM 7b v01
```
Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert.
Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter
Das Buch enthält also insgesamt 150.000 Wörter.
```
Prompt:
```
Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT:
```
Response EM 7b v01
```
Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind.
1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern.
2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen.
3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist.
4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben.
```
## Factual Retrieval / RAG
EM Models offer a special format for Factual Retrieval/RAG that is optimized for minimizing hallucinations. We use the same format as Airoboros for this:
### Example
Prompt
```
Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT
BEGINCONTEXT
Url: https://www.jph.me
ENDCONTEXT
Das Wetter in Düsseldorf wird heute schön und sonnig!
ENDINPUT
BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT:
```
Model response:
```
Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.
```
Note that the model would normally answer a trivial question like this correctly, but it is trained to decline to answer if there is no source between BEGININPUT and ENDINPUT that provides grounding.
If we keep the same Context/Input but ask instead:
```
BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT:
```
We get the correct answer with the source:
```
Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me)
```
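To build this layout programmatically, a small helper along these lines can be used (a sketch; `build_rag_prompt` is illustrative and not part of the model release, and the system prompt is copied verbatim from the example above):
```python
RAG_SYSTEM = (
    "Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags "
    "BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie "
    "Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der "
    "Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu "
    "finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine "
    "Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist!"
)

def build_rag_prompt(question: str, sources: list) -> str:
    # sources: list of (metadata, text) pairs, each rendered as a BEGININPUT block
    blocks = [
        f"BEGININPUT\nBEGINCONTEXT\n{meta}\nENDCONTEXT\n{text}\nENDINPUT"
        for meta, text in sources
    ]
    return (
        f"{RAG_SYSTEM} USER: " + "\n".join(blocks)
        + f"\nBEGININSTRUCTION {question} ENDINSTRUCTION ASSISTANT:"
    )

print(build_rag_prompt(
    "Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an!",
    [("Url: https://www.jph.me", "Das Wetter in Düsseldorf wird heute schön und sonnig!")],
))
```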
# Evaluation
I will publish some custom German benchmarks and the results for them soon.
# Dataset
The dataset is a mixture of multi-turn chats, retrieval instructions and synthetically generated instructions spanning many topics and applications.
## Use with transformers
Unchanged from the base model:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'jphme/em_german_7b_v01'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
def ask_model(instruction, system='Du bist ein hilfreicher Assistent.'):
prompt=f"{system} USER: {instruction} ASSISTANT:"
input_tokens=tokenizer(prompt, return_tensors="pt").to(model.device)
output_tokens=model.generate(**input_tokens, max_new_tokens=200)[0]
answer=tokenizer.decode(output_tokens, skip_special_tokens=True)
return answer
print(ask_model("Nenne mir 10 gute Gründe dafür, heute Sport zu machen!"))
```
# Limitations & Biases
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
# Acknowledgements:
Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code, from which I borrowed many ideas and code snippets.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).
# Contact
If you are interested in customized LLMs for business applications, please get in contact with me via [my website](https://www.jph.me). I am also always happy about suggestions and feedback.
*PS: I am also still searching for a Co-Founder*.
# Disclaimer:
The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files.
-------------
-------------
-------------
# (Deutsch) EM German
**EM German (v01)** ist eine experimentelle, Llama2-basierte KI-Modellreihe, die in deutscher Sprache fine-getuned wurde.
Die Modelle sind für deutschen Text optimiert und können Inhalte in deutscher Sprache verstehen und generieren.
# Links & Demos
Im [Github-Repo](https://github.com/jphme/EM_German) Repo werde ich weitere Beispiele und Code-Snippets veröffentlichen.
## Model Links
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc |
| [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | soon | tbc |
## Colab:
Einige der Modelle können mit kostenlosen Google Colab-Instanzen verwendet werden (z.B. das 7b-Modell in 8-Bit oder das 13b-Modell mit GPTQ):
* [Beispiel-Notebook 13b mit GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Beispiel-Notebook 7b mit 8-Bit](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* Weitere Beispiele für LLM-Apps folgen in Kürze...
# Eingabeformat
Dieses Modell folgt dem Vicuna-Format ohne Zeilenumbrüche (sollte aber auch mit Zeilenumbrüchen funktionieren). Das Format lautet wie folgt:
```
Du bist ein hilfreicher Assistent. USER: <Anweisung> ASSISTANT:
```
Sie können das Standard-Systemprompt aber auch ändern (siehe unten für ein Beispiel bei Aufgaben mit externen Datenbanken).
# Ergebnisse
Anbei einige Beispiele für Modell-Outputs. Wir werden demnächst einige kleine deutsche Benchmarks veröffentlichen (basierend auf dem Eleuther LM Evaluation Harness), die wir bei diesen Modellen bereits getestet haben.
## Beispiel-Ausgaben
Prompt:
```
Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT:
```
Antwort EM 7b v01
```
Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert.
Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter
Das Buch enthält also insgesamt 150.000 Wörter.
```
Prompt:
```
Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT:
```
Antwort EM 7b v01:
```
Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind.
1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern.
2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen.
3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist.
4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben.
```
## Sachliche Abfrage / RAG
EM-Modelle bieten ein spezielles Format für RAG (also die Verwendung mit Informationen aus zusätzlichen externen Quellen wie Datenbanken oder Webseiten), das für die Minimierung von Halluzinationen und Falschinformationen optimiert ist. Wir verwenden dasselbe Format wie Airoboros dafür.
### Beispiel
Prompt
```
Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT
BEGINCONTEXT
Url: https://www.jph.me
ENDCONTEXT
Das Wetter in Düsseldorf wird heute schön und sonnig!
ENDINPUT
BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT:
```
Modellantwort:
```
Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.
```
Hier würde das Modell normalerweise die (triviale) Antwort (2) geben, ist aber trainiert, keine Antwort zu geben, wenn es keine Quelle zwischen BEGININPUT und ENDINPUT gibt, die eine belegbare Antwort liefert.
Wenn wir stattdessen fragen:
```
BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT:
```
Bekommen wir die korrekte Antwort mit Quellenangabe
```
Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me)
```
# Evaluation
coming soon
# Datensatz
Der Datensatz ist eine Mischung aus Multi-Turn-Chats, RAG Instructions und synthetisch generierten Finetuning-Daten, die viele Themen und Anwendungen abdecken.
# Einschränkungen & Vorurteile
Dieses Modell kann sachlich falsche Ausgaben produzieren und sollte nur zu Forschungszwecken verwendet werden.
# Danksagungen:
Vielen Dank an [winglian/caseus](https://huggingface.co/winglian) für seine großartige Arbeit an Axolotl, die ich zum Training der EM-Modelle verwendet habe. Ich bin auch [Jon Durbin](https://huggingface.co/jondurbin) für seine Airoboros-Modelle und Code-Schnipsel dankbar, von denen ich viele Ideen nutzen konnte.
Das 70b-Modell wurde mit Unterstützung des [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/) trainiert.
# Kontakt
Wenn Sie an customized LLMs für geschäftliche Anwendungen interessiert sind, kontaktieren Sie mich bitte über [meine Website](https://www.jph.me). Ich freue mich auch immer über Anregungen und Feedback zu meinen Modellen.
*PS: Ich suche auch immer noch einen Co-Founder für unser Startup, das sich noch im Stealth-Modus befindet.*
# Haftungsausschluss:
Ich bin nicht verantwortlich für die Handlungen Dritter, die dieses Modell verwenden. Dieses Modell sollte nur für Forschungszwecke verwendet werden. Die ursprüngliche Llama2-Lizenz gilt und wird mit den Modell-Dateien verteilt.
<!-- original-model-card end -->
|
BAAI/EVA-CLIP-8B | BAAI | "2024-02-07T02:29:37Z" | 1,255 | 43 | transformers | [
"transformers",
"pytorch",
"clip",
"feature-extraction",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:kakaobrain/coyo-700m",
"arxiv:2402.04252",
"license:apache-2.0",
"region:us"
] | feature-extraction | "2024-02-05T06:36:46Z" | ---
license: apache-2.0
datasets:
- laion/laion2B-en
- kakaobrain/coyo-700m
---
<div align="center">
<h2><a href="https://arxiv.org/abs/2402.04252">EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters</a></h2>
[Quan Sun](https://github.com/Quan-Sun)<sup>1*</sup>, [Jinsheng Wang](https://github.com/Wolfwjs/)<sup>1*</sup>, [Qiying Yu](https://yqy2001.github.io)<sup>1,2*</sup>, [Yufeng Cui](https://scholar.google.com/citations?hl=en&user=5Ydha2EAAAAJ)<sup>1</sup>, [Fan Zhang](https://scholar.google.com/citations?user=VsJ39HMAAAAJ)<sup>1</sup>, [Xiaosong Zhang](https://zhangxiaosong18.github.io)<sup>1</sup>, [Xinlong Wang](https://www.xloong.wang/)<sup>1</sup>
<sup>1</sup> [BAAI](https://www.baai.ac.cn/english.html), <sup>2</sup> [THU](https://air.tsinghua.edu.cn) <br><sup>*</sup> equal contribution
[Paper](https://arxiv.org/abs/2402.04252) | [Github](https://github.com/baaivision/EVA/tree/master/EVA-CLIP-18B)
</div>
Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18-billion parameters. With only 6-billion training samples seen, EVA-CLIP-18B achieves an exceptional **80.7%** zero-shot top-1 accuracy averaged across 27 widely recognized image classification benchmarks, outperforming its forerunner EVA-CLIP (5-billion parameters) and other open-source CLIP models by a large margin. Remarkably, we observe a consistent performance improvement with the model size scaling of EVA-CLIP, despite maintaining a constant training dataset of 2-billion image-text pairs from LAION-2B and COYO-700M. This dataset is openly available and much smaller than the in-house datasets (e.g., DFN-5B, WebLI-10B) employed in other state-of-the-art CLIP models. EVA-CLIP-18B demonstrates the potential of EVA-style weak-to-strong visual model scaling. With our model weights made publicly available, we hope to facilitate future research in vision and multimodal foundation models.
**Table of Contents**
- [Summary of EVA-CLIP performance](#summary-of-eva-clip-performance)
- [Model Card](#model-card)
- [EVA-CLIP-8B](#eva-clip-8b)
- [EVA-CLIP-18B](#eva-clip-18b)
- [Usage](#usage)
- [BibTeX \& Citation](#bibtex--citation)
## Summary of EVA-CLIP performance

Scaling behavior of EVA-CLIP, shown as zero-shot classification performance averaged across 27 image classification benchmarks, compared with the current state-of-the-art and largest CLIP models (224px). The diameter of each circle corresponds to the forward GFLOPs × the number of training samples seen. The performance of EVA-CLIP improves consistently as the model scales up.
## Model Card
### EVA-8B
<div align="center">
| model name | total #params | seen samples | pytorch weight |
|:-----------|:------:|:------:|:------:|
| `EVA_8B_psz14` | 7.5B | 6B | [PT](https://huggingface.co/BAAI/EVA-CLIP-8B/resolve/main/EVA_8B_psz14.bin) (`31.0GB`) |
</div>
### EVA-CLIP-8B
> Image encoder MIM teacher: [EVA02_CLIP_E_psz14_plus_s9B](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_E_psz14_s4B.pt).
<div align="center">
| model name | image enc. init. ckpt | text enc. init. ckpt | total #params | training data | training batch size | gpus for training | img. cls. avg. acc. | video cls. avg. acc. | retrieval MR | hf weight | pytorch weight |
|:-----|:-----|:-----------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| `EVA-CLIP-8B` | `EVA_8B_psz14` | `EVA02_CLIP_E_psz14_plus_s9B` | 8.1B | Merged-2B | 178K | 384 A100(40GB) | **79.4** | **73.6** | **86.2**| [🤗 HF](https://huggingface.co/BAAI/EVA-CLIP-8B) | [PT](https://huggingface.co/BAAI/EVA-CLIP-8B/resolve/main/EVA_CLIP_8B_psz14_s9B.pt) (`32.9GB`)|
| `EVA-CLIP-8B-448` | `EVA-CLIP-8B` | `EVA-CLIP-8B` | 8.1B | Merged-2B | 24K | 384 A100(40GB) | **80.0** | **73.7** | **86.4** | [🤗 HF](https://huggingface.co/BAAI/EVA-CLIP-8B-448) | [PT](https://huggingface.co/BAAI/EVA-CLIP-8B-448/resolve/main/EVA_CLIP_8B_psz14_plus_s0.6B.pt) (`32.9GB`)|
</div>
### EVA-CLIP-18B
> Image encoder MIM teacher: [EVA02_CLIP_E_psz14_plus_s9B](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_E_psz14_s4B.pt).
<div align="center">
| model name | image enc. init. ckpt | text enc. init. ckpt | total #params | training data | training batch size | gpus for training | img. cls. avg. acc. | video cls. avg. acc. | retrieval MR | hf weight | pytorch weight |
|:-----|:-----|:-----------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| `EVA-CLIP-18B` | `EVA_18B_psz14` | `EVA02_CLIP_E_psz14_plus_s9B` | 18.1B | Merged-2B+ | 108K | 360 A100(40GB) | **80.7** | **75.0** | **87.8**| stay tuned | stay tuned |
</div>
- To construct Merged-2B, we merged 1.6 billion samples from [LAION-2B](https://laion.ai/blog/laion-5b/) dataset with 0.4 billion samples from [COYO-700M](https://github.com/kakaobrain/coyo-dataset).
- The Merged-2B+ consists of all samples from Merged-2B, along with 20 million samples from [LAION-COCO](https://laion.ai/blog/laion-coco/) and 23 million samples from Merged-video, which includes [VideoCC](https://github.com/google-research-datasets/videoCC-data), [InternVid](https://huggingface.co/datasets/OpenGVLab/InternVid) and [WebVid-10M](https://maxbain.com/webvid-dataset/). Merged-video was added at the end of the training process.
**It's important to note that all results presented in the paper are evaluated using PyTorch weights. There may be differences in performance when using Hugging Face (hf) models.**
## Zero-Shot Evaluation
We use [CLIP-Benchmark](https://github.com/LAION-AI/CLIP_benchmark) to evaluate the zero-shot performance of EVA-CLIP models. Following [vissl](https://github.com/facebookresearch/vissl/blob/main/extra_scripts/datasets/create_k700_data_files.py), we evaluate zero-shot video classification using a single middle frame. Further details regarding the evaluation datasets can be found in our paper, particularly in Table 11.
## Usage
### Huggingface Version
```python
from PIL import Image
from transformers import AutoModel, AutoConfig
from transformers import CLIPImageProcessor, pipeline, CLIPTokenizer
import torch
import torchvision.transforms as T
from torchvision.transforms import InterpolationMode
image_path = "CLIP.png"
model_name_or_path = "BAAI/EVA-CLIP-8B" # or /path/to/local/EVA-CLIP-8B
image_size = 224
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
# use image processor with config
# processor = CLIPImageProcessor(size={"shortest_edge":image_size}, do_center_crop=True, crop_size=image_size)
## you can also directly use the image processor by torchvision
## squash
# processor = T.Compose(
# [
# T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
# T.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
# T.ToTensor(),
# T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
# ]
# )
## shortest
## processor = T.Compose(
# [
# T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
# T.Resize(image_size, interpolation=InterpolationMode.BICUBIC),
# T.CenterCrop(image_size),
# T.ToTensor(),
# T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
# ]
# )
model = AutoModel.from_pretrained(
model_name_or_path,
torch_dtype=torch.float16,
trust_remote_code=True).to('cuda').eval()
image = Image.open(image_path)
captions = ["a diagram", "a dog", "a cat"]
tokenizer = CLIPTokenizer.from_pretrained(model_name_or_path)
input_ids = tokenizer(captions, return_tensors="pt", padding=True).input_ids.to('cuda')
input_pixels = processor(images=image, return_tensors="pt", padding=True).pixel_values.to('cuda')
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(input_pixels)
text_features = model.encode_text(input_ids)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
label_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(f"Label probs: {label_probs}")
```
### Pytorch version
Go to [GitHub](https://github.com/baaivision/EVA/tree/master/EVA-CLIP-18B)
```python
import torch
from eva_clip import create_model_and_transforms, get_tokenizer
from PIL import Image
model_name = "EVA-CLIP-8B"
pretrained = "eva_clip" # or "/path/to/EVA_CLIP_8B_psz14_s9B.pt"
image_path = "CLIP.png"
caption = ["a diagram", "a dog", "a cat"]
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, processor = create_model_and_transforms(model_name, pretrained, force_custom_clip=True)
tokenizer = get_tokenizer(model_name)
model = model.to(device)
image = processor(Image.open(image_path)).unsqueeze(0).to(device)
text = tokenizer(["a diagram", "a dog", "a cat"]).to(device)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
You can leverage [deepspeed.zero.Init()](https://deepspeed.readthedocs.io/en/stable/zero3.html#constructing-massive-models) with deepspeed zero stage 3 if you have limited CPU memory. For loading a pretrained checkpoint in the context of using deepspeed.zero.Init(), it's advised to use the `load_zero_partitions()` function in `eva_clip/factory.py`.
## BibTeX & Citation
```
@article{EVA-CLIP-18B,
title={EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters},
author={Quan Sun and Jinsheng Wang and Qiying Yu and Yufeng Cui and Fan Zhang and Xiaosong Zhang and Xinlong Wang},
journal={arXiv preprint arXiv:2402.04252},
year={2023}
}
``` |
llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0 | llm-jp | "2024-04-30T02:39:02Z" | 1,255 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"dataset:llm-jp/oasst2-33k-en",
"dataset:llm-jp/oasst2-33k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T17:57:09Z" | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
datasets:
- databricks/databricks-dolly-15k
- llm-jp/databricks-dolly-15k-ja
- llm-jp/oasst1-21k-en
- llm-jp/oasst1-21k-ja
- llm-jp/oasst2-33k-en
- llm-jp/oasst2-33k-ja
---
# llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0
This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan.
| Model Variant |
| :--- |
|**Instruction models**|
| [llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| [llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| [llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| |
| :--- |
|**Pre-trained models**|
| [llm-jp-13b-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-v2.0) |
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
{"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 256B
|Model|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|:---:|
|13b model|13b|40|5120|40|4096|
## Training
- **Pre-training:**
- **Hardware:** 128 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** Megatron-LM
- **Instruction tuning:**
- **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** [TRL](https://github.com/huggingface/trl) and [DeepSpeed](https://github.com/microsoft/DeepSpeed)
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.2 (100k: code20K_en40K_ja60K.ver2.2)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (pure SentencePiece training does not reproduce our vocabulary).
- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model
- **Training algorithm:** Merging Code/English/Japanese vocabularies constructed with SentencePiece Unigram byte-fallback and re-estimating scores with the EM algorithm.
- **Training data:** A subset of the datasets for model pre-training
- **Vocabulary size:** 96,867 (mixed vocabulary of Japanese, English, and source code)
- The actual vocabulary size in the pretrained model is 97,024 due to rounding up to a multiple of 256.
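The round-up works out as follows (a quick sanity check):
```python
import math

vocab_size = 96_867
padded = math.ceil(vocab_size / 256) * 256  # round up to the next multiple of 256
print(padded)  # 97024
```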
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.4B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v2)|130.7B
|English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|4.7B
||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|110.3B
|Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|8.7B
### Instruction tuning
The models have been fine-tuned on the following datasets.
| Language | Dataset | description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-001](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset |
| |[answer-carefully-001](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed Japanese instruction dataset focusing on LLMs' safety |
| |[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL |
| |[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL |
| |[oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL |
|English |[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | - |
| |[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| |[oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) |
## Evaluation
You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) (v1.3.0) for the evaluation.
In addition, we used the LLM-as-a-judge frameworks [Japanese Vicuna QA Benchmark](https://github.com/ku-nlp/ja-vicuna-qa-benchmark/) and [Japanese MT Bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge) for evaluation.
For details, please refer to [our technical blog](https://llm-jp.nii.ac.jp/blog/2024/04/30/v2.0-release.html) (in Japanese).
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Namgi Han, Tatsuya Hiraoka, Hirokazu Kiyomaru, Takashi Kodama, and Hiroshi Matsuda. |
pszemraj/mega-ar-350m-v0.13 | pszemraj | "2024-05-15T02:39:01Z" | 1,255 | 0 | transformers | [
"transformers",
"safetensors",
"mega",
"text-generation",
"generated_from_trainer",
"en",
"dataset:BEE-spoke-data/UltraTextbooks-2.1-fw_mix",
"dataset:BEE-spoke-data/napierone-epub-raw",
"dataset:BEE-spoke-data/knowledge-inoc-concat-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-13T00:54:28Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
language:
- en
datasets:
- BEE-spoke-data/UltraTextbooks-2.1-fw_mix
- BEE-spoke-data/napierone-epub-raw
- BEE-spoke-data/knowledge-inoc-concat-v1
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.7
repetition_penalty: 1.10
no_repeat_ngram_size: 6
eta_cutoff: 0.0008
renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: >-
Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made
Destiny. The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: >-
The Harry Potter series, written by J.K. Rowling, begins with the book
titled
example_title: Harry Potter Series
- text: >-
Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: >-
Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: >-
Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and
another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
---
# mega-ar-350m-v0.13
## Model description
Continued-training of [BEE-spoke-data/mega-ar-350m-L3t-v0.08-ultraTBfw](https://hf.co/BEE-spoke-data/mega-ar-350m-L3t-v0.08-ultraTBfw) on a few more datasets.
It achieves the following results on the evaluation set (`BEE-spoke-data/UltraTextbooks-2.1-fw_mix`):
- Loss: 1.9926
- Accuracy: 0.5885
- Num Input Tokens Seen: 3468165120
## Quick eval
Quick eval for: pszemraj/mega-ar-350m-v0.13
hf (pretrained=pszemraj/mega-ar-350m-v0.13,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8
| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|----------|------:|---|-----:|
|arc_easy | 1|none | 0|acc | 0.4491|± |0.0102|
| | |none | 0|acc_norm | 0.4061|± |0.0101|
|boolq | 2|none | 0|acc | 0.5367|± |0.0087|
|lambada_openai| 1|none | 0|perplexity|55.3308|± |2.3100|
| | |none | 0|acc | 0.3113|± |0.0065|
|openbookqa | 1|none | 0|acc | 0.1760|± |0.0170|
| | |none | 0|acc_norm | 0.2680|± |0.0198|
|piqa | 1|none | 0|acc | 0.6366|± |0.0112|
| | |none | 0|acc_norm | 0.6213|± |0.0113|
|winogrande | 1|none | 0|acc | 0.5036|± |0.0141|
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 80085
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 32
- total_train_batch_size: 96
- total_eval_batch_size: 3
- optimizer: Adam with betas=(0.9,0.985) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
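The card does not include a usage snippet, so here is a minimal sketch using the sampling settings from the inference configuration above (it assumes this MEGA checkpoint loads through the standard `transformers` text-generation pipeline; `trust_remote_code=True` mirrors the quick-eval setting):
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="pszemraj/mega-ar-350m-v0.13",
    trust_remote_code=True,
)
out = pipe(
    "The Mona Lisa is a world-renowned painting created by",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.10,
    no_repeat_ngram_size=6,
)
print(out[0]["generated_text"])
```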
|
cecibas/llama-13b-accelerator | cecibas | "2024-06-08T02:16:23Z" | 1,255 | 0 | transformers | [
"transformers",
"safetensors",
"mlp_speculator",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-07T21:33:50Z" | ---
license: llama2
---
**Original repo**: https://huggingface.co/ibm-fms/llama-13b-accelerator
This repo is just for experimentation.
---
## Installation from source
```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
## Description
This model is intended to be used as an accelerator for [llama 13B (chat)](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) and takes inspiration
from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and sampled token
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is lightweight and can be completed in only a few days, depending on base model size and speed.
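Conceptually, the draft step can be sketched as follows (a toy illustration with made-up sizes; this is not the actual `fms-extras` implementation):
```python
import torch
import torch.nn as nn

hidden, vocab, n_stages = 16, 100, 3  # toy sizes, illustrative only
embed = nn.Embedding(vocab, hidden)
stage_mlps = nn.ModuleList(nn.Linear(2 * hidden, vocab) for _ in range(n_stages))

def draft_ngram(state_vector: torch.Tensor, last_token: int) -> list:
    # Each stage conditions on the base model's state vector plus the token
    # sampled at the previous stage, emitting one additional draft token.
    tokens, token = [], last_token
    for mlp in stage_mlps:
        x = torch.cat([state_vector, embed(torch.tensor(token))], dim=-1)
        probs = torch.softmax(mlp(x), dim=-1)
        token = int(torch.multinomial(probs, 1))
        tokens.append(token)
    return tokens

print(draft_ngram(torch.randn(hidden), last_token=0))  # e.g. a 3-token draft for the base model to verify
```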
## Repository Links
1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)
## Samples
_Note: For all samples, your environment must have access to cuda_
### Use in IBM Production TGIS
*To try this out running in a production-like environment, please use the pre-built docker image:*
#### Setup
```bash
HF_HUB_CACHE=/hf_hub_cache
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ddc56ee
docker pull $TGIS_IMAGE
# optionally download llama-2-13b-chat if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
meta-llama/Llama-2-13b-chat-hf \
--token $HF_HUB_TOKEN
# optionally download the speculator model if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
ibm-fms/llama-13b-accelerator \
--token $HF_HUB_TOKEN
# note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name>
docker run -d --rm --gpus all \
--name my-tgis-server \
-p 8033:8033 \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
-e MODEL_NAME=meta-llama/Llama-2-13b-chat-hf \
-e SPECULATOR_NAME=ibm-fms/llama-13b-accelerator \
-e FLASH_ATTENTION=true \
-e PAGED_ATTENTION=true \
-e DTYPE=float16 \
$TGIS_IMAGE
# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f
# get the client sample (Note: The first prompt will take longer as there is a warmup time)
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```
#### Run Sample
```bash
python sample_client.py
```
_Note: first prompt may be slower as there is a slight warmup time_
### Use in Huggingface TGI
#### start the server
```bash
model=ibm-fms/llama-13b-accelerator
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model
```
_note: for tensor parallel, add --num-shard_
#### make a request
```bash
curl 127.0.0.1:8080/generate_stream \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
```
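The same request can be sent from Python (a sketch using `requests`; the endpoint and payload mirror the curl example above):
```python
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate_stream",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    stream=True,
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))  # server-sent event lines containing partial generations
```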
### Minimal Sample
*To try this out with the fms-native compiled model, please execute the following:*
#### Install
```bash
git clone https://github.com/foundation-model-stack/fms-extras
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```
#### Run Sample
##### batch_size=1 (compile + cudagraphs)
```bash
MODEL_PATH=/path/to/llama/hf/13B-F
python fms-extras/scripts/paged_speculative_inference.py \
--variant=13b \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm-fms/llama-13b-accelerator \
--speculator_source=hf \
--speculator_variant=840m \
--compile \
--compile_mode=reduce-overhead
```
##### batch_size=1 (compile)
```bash
MODEL_PATH=/path/to/llama/hf/13B-F
python fms-extras/scripts/paged_speculative_inference.py \
--variant=13b \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm-fms/llama-13b-accelerator \
--speculator_source=hf \
--speculator_variant=840m \
--compile
```
##### batch_size=4 (compile)
```bash
MODEL_PATH=/path/to/llama/hf/13B-F
python fms-extras/scripts/paged_speculative_inference.py \
--variant=13b \
--model_path=$MODEL_PATH \
--model_source=hf \
--tokenizer=$MODEL_PATH \
--speculator_path=ibm-fms/llama-13b-accelerator \
--speculator_source=hf \
--speculator_variant=840m \
--batch_input \
--compile
``` |
cointegrated/rut5-base-multitask | cointegrated | "2023-03-17T14:12:20Z" | 1,254 | 33 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"russian",
"ru",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" |
---
language: ["ru", "en"]
tags:
- russian
license: mit
widget:
- text: "fill | Почему они не ___ на меня?"
---
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only some Russian and English embeddings left.
More details are given in a Russian post: https://habr.com/ru/post/581932/
The model has been fine-tuned for several tasks with sentences or short paragraphs:
* translation (`translate ru-en` and `translate en-ru`)
* Paraphrasing (`paraphrase`)
* Filling gaps in a text (`fill`). The gaps can be denoted as `___` or `_3_`, where `3` is the approximate number of words that should be inserted.
* Restoring the text from a noisy bag of words (`assemble`)
* Simplification of texts (`simplify`)
* Dialogue response generation (`reply` based on fiction and `answer` based on online forums)
* Open-book question answering (`comprehend`)
* Asking questions about a text (`ask`)
* News title generation (`headline`)
For each task, the task name is joined with the input text by the ` | ` separator.
The model can be run with the following code:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base-multitask")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base-multitask")
def generate(text, **kwargs):
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
return tokenizer.decode(hypotheses[0], skip_special_tokens=True)
```
The model can be applied to each of the pretraining tasks:
```
print(generate('translate ru-en | Каждый охотник желает знать, где сидит фазан.'))
# Each hunter wants to know, where he is.
print(generate('paraphrase | Каждый охотник желает знать, где сидит фазан.',
encoder_no_repeat_ngram_size=1, repetition_penalty=0.5, no_repeat_ngram_size=1))
# В любом случае каждый рыбак мечтает познакомиться со своей фермой
print(generate('fill | Каждый охотник _3_, где сидит фазан.'))
# смотрит на озеро
print(generate('assemble | охотник каждый знать фазан сидит'))
# Каждый охотник знает, что фазан сидит.
print(generate('simplify | Местным продуктом-специалитетом с защищённым географическим наименованием по происхождению считается люнебургский степной барашек.', max_length=32))
# Местным продуктом-специалитетом считается люнебургский степной барашек.
print(generate('reply | Помогите мне закадрить девушку'))
# Что я хочу?
print(generate('answer | Помогите мне закадрить девушку'))
# я хочу познакомиться с девушкой!!!!!!!!
print(generate("comprehend | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо. Вопрос: откуда приехал Морган?"))
# из Австралии
print(generate("ask | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
# Что разворачивается на фоне земельного конфликта между владельцами овец и ранчеро?
print(generate("headline | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
"прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
# На фоне земельного конфликта разворачивается история любви овцевода Моргана Лейна и Марии Синглетон
```
However, it is strongly recommended that you fine-tune the model for your own task. |
alvanlii/whisper-small-cantonese | alvanlii | "2024-06-07T01:37:47Z" | 1,254 | 38 | transformers | [
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"arxiv:2201.02419",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-08T19:22:17Z" | ---
language:
- zh
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small zh-HK - Alvin
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 yue
type: mozilla-foundation/common_voice_16_0
config: yue
split: test
args: yue
metrics:
- name: Normalized CER
type: cer
value: 7.93
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small zh-HK - Alvin
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) for the Cantonese language. It achieves a CER of 7.93 (without punctuation) and 9.72 (with punctuation) on Common Voice 16.0.
## Training and evaluation data
For training,
- CantoMap: Winterstein, Grégoire, Tang, Carmen and Lai, Regine (2020) "CantoMap: a Hong Kong Cantonese MapTask Corpus", in Proceedings of The 12th Language Resources and Evaluation Conference, Marseille: European Language Resources Association, p. 2899-2906.
- Cantonese-ASR: Yu, Tiezheng, Frieske, Rita, Xu, Peng, Cahyawijaya, Samuel, Yiu, Cheuk Tung, Lovenia, Holy, Dai, Wenliang, Barezi, Elham, Chen, Qifeng, Ma, Xiaojuan, Shi, Bertram, Fung, Pascale (2022) "Automatic Speech Recognition Datasets in Cantonese: A Survey and New Dataset", 2022. Link: https://arxiv.org/pdf/2201.02419.pdf
|Name|# of Hours|
|--|--|
|Common Voice 16.0 zh-HK Train|138|
|Common Voice 16.0 yue Train|85|
|Common Voice 17.0 yue Train|178|
|Cantonese-ASR|72|
|CantoMap|23|
|[Pseudo-Labelled YouTube Data](https://huggingface.co/datasets/alvanlii/cantonese-youtube-pseudo-transcription)|438|
For evaluation, Common Voice 16.0 yue Test set is used.
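CER can be reproduced with the Hugging Face `evaluate` library (a sketch; whether punctuation is stripped beforehand is up to you, and the example strings are illustrative):
```
import evaluate

cer_metric = evaluate.load("cer")
score = cer_metric.compute(
    predictions=["今日天氣好好"],  # model transcription
    references=["今日天氣很好"],   # ground-truth transcript
)
print(score)  # character error rate, lower is better
```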
## Results
- CER (lower is better): 0.0972
- down from 0.1073, 0.1581 in the previous versions
- CER (punctuations removed): 0.0793
- GPU Inference with Fast Attention (example below): 0.055s/sample
- Note all GPU evaluations are done on RTX 3090 GPU
- GPU Inference: 0.308s/sample
- CPU Inference: 2.57s/sample
- GPU VRAM: ~1.5 GB
## Using the Model
```
import librosa
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
y, sr = librosa.load('audio.mp3', sr=16000)
MODEL_NAME = "alvanlii/whisper-small-cantonese"
processor = WhisperProcessor.from_pretrained(MODEL_NAME)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME)
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
model.config.use_cache = False
processed_in = processor(y, sampling_rate=sr, return_tensors="pt")
gout = model.generate(
input_features=processed_in.input_features,
output_scores=True, return_dict_in_generate=True
)
transcription = processor.batch_decode(gout.sequences, skip_special_tokens=True)[0]
print(transcription)
```
- Alternatively, you can use huggingface pipelines
```
import torch
from transformers import pipeline

MODEL_NAME = "alvanlii/whisper-small-cantonese"
lang = "zh"
device = 0 if torch.cuda.is_available() else "cpu"
pipe = pipeline(
    task="automatic-speech-recognition",
    model=MODEL_NAME,
    chunk_length_s=30,
    device=device,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task="transcribe")
text = pipe("audio.mp3")["text"]  # path to your audio file
print(text)
```
## Model Speedup
Just add `attn_implementation="sdpa"` when loading the model to enable PyTorch scaled-dot-product attention (Flash Attention).
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq

torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "alvanlii/whisper-small-cantonese",
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
    attn_implementation="sdpa",
)
```
Using Flash Attention reduced the amount of time taken per sample from 0.308s to 0.055s.
## Speculative Decoding
You can use a bigger model, then use `alvanlii/whisper-small-cantonese` to speed up inference with basically no loss in accuracy.
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "simonl0909/whisper-large-v2-cantonese"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
    attn_implementation="sdpa",
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)

assistant_model_id = "alvanlii/whisper-small-cantonese"
assistant_model = AutoModelForSpeechSeq2Seq.from_pretrained(
    assistant_model_id,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
    attn_implementation="sdpa",
)
assistant_model.to(device)
...
model.generate(**inputs, use_cache=True, assistant_model=assistant_model)
```
In the original `simonl0909/whisper-large-v2-cantonese` model, it runs at 0.714s/sample for a CER of 7.65. \
Using speculative decoding with `alvanlii/whisper-small-cantonese`, it runs at 0.137s/sample for a CER of 7.67, which is much faster.
## Whisper.cpp
A GGML bin file for whisper.cpp was uploaded in June 2024. You can download the bin file [here](https://huggingface.co/alvanlii/whisper-small-cantonese/blob/main/ggml-model.bin) and try it out [here](https://whisper.ggerganov.com/).
## Training Hyperparameters
- learning_rate: 5e-5
- train_batch_size: 25 (on 1 3090 GPU)
- eval_batch_size: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 25x4=100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 15000
- augmentation: None
|
digiplay/RunDiffusionFX2.5D_v1_diffusers | digiplay | "2023-12-19T17:24:42Z" | 1,254 | 9 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-03T22:33:29Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info: https://civitai.com/models/82981/rundiffusion-fx-25d
Sample images I made:


|
tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity | tomaarsen | "2023-12-06T09:09:41Z" | 1,254 | 0 | setfit | [
"setfit",
"pytorch",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"en",
"dataset:tomaarsen/setfit-absa-semeval-restaurants",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"region:us"
] | text-classification | "2023-12-04T14:48:52Z" | ---
language: en
license: apache-2.0
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- tomaarsen/setfit-absa-semeval-restaurants
metrics:
- accuracy
widget:
- text: (both in quantity AND quality):The Prix Fixe menu is worth every penny and
you get more than enough (both in quantity AND quality).
- text: over 100 different beers to offer thier:The have over 100 different beers
to offer thier guest so that made my husband very happy and the food was delicious,
if I must recommend a dish it must be the pumkin tortelini.
- text: back with a plate of dumplings.:Get your food to go, find a bench, and kick
back with a plate of dumplings.
- text: the udon was soy sauce and water.:The soup for the udon was soy sauce and
water.
- text: times for the beef cubes - they're:i've been back to nha trang literally a
hundred times for the beef cubes - they're that good.
pipeline_tag: text-classification
inference: false
co2_eq_emissions:
emissions: 15.732253126728272
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.174
hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit Polarity Model with BAAI/bge-small-en-v1.5 on SemEval 2014 Task 4 (Restaurants)
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: SemEval 2014 Task 4 (Restaurants)
type: tomaarsen/setfit-absa-semeval-restaurants
split: test
metrics:
- type: accuracy
value: 0.748561042108452
name: Accuracy
---
# SetFit Polarity Model with BAAI/bge-small-en-v1.5 on SemEval 2014 Task 4 (Restaurants)
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SemEval 2014 Task 4 (Restaurants)](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-restaurants) dataset that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_lg
- **SetFitABSA Aspect Model:** [tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect](https://huggingface.co/tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect)
- **SetFitABSA Polarity Model:** [tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity](https://huggingface.co/tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
- **Training Dataset:** [SemEval 2014 Task 4 (Restaurants)](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-restaurants)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negative | <ul><li>'But the staff was so horrible:But the staff was so horrible to us.'</li><li>', forgot our toast, left out:They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.'</li><li>'did not have mayonnaise, forgot our:They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.'</li></ul> |
| positive | <ul><li>"factor was the food, which was:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"The food is uniformly exceptional:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li><li>"a very capable kitchen which will proudly:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li></ul> |
| neutral | <ul><li>"'s on the menu or not.:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li><li>'to sample both meats).:Our agreed favorite is the orrechiete with sausage and chicken (usually the waiters are kind enough to split the dish in half so you get to sample both meats).'</li><li>'to split the dish in half so:Our agreed favorite is the orrechiete with sausage and chicken (usually the waiters are kind enough to split the dish in half so you get to sample both meats).'</li></ul> |
| conflict | <ul><li>'The food was delicious but:The food was delicious but do not come here on a empty stomach.'</li><li>"The service varys from day:The service varys from day to day- sometimes they're very nice, and sometimes not."</li><li>'Though the Spider Roll may look like:Though the Spider Roll may look like a challenge to eat, with soft shell crab hanging out of the roll, it is well worth the price you pay for them.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7486 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect",
"tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 6 | 22.4980 | 51 |
| Label | Training Sample Count |
|:---------|:----------------------|
| conflict | 6 |
| negative | 43 |
| neutral | 36 |
| positive | 170 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (5, 5)
- max_steps: 5000
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0078 | 1 | 0.2397 | - |
| 0.3876 | 50 | 0.2252 | - |
| 0.7752 | 100 | 0.1896 | 0.1883 |
| 1.1628 | 150 | 0.0964 | - |
| **1.5504** | **200** | **0.0307** | **0.1792** |
| 1.9380 | 250 | 0.0275 | - |
| 2.3256 | 300 | 0.0138 | 0.2036 |
| 2.7132 | 350 | 0.006 | - |
| 3.1008 | 400 | 0.0035 | 0.2287 |
| 3.4884 | 450 | 0.0015 | - |
| 3.8760 | 500 | 0.0016 | 0.2397 |
| 4.2636 | 550 | 0.001 | - |
| 4.6512 | 600 | 0.0009 | 0.2477 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.016 kg of CO2
- **Hours Used**: 0.174 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SetFit: 1.0.0.dev0
- Sentence Transformers: 2.2.2
- spaCy: 3.7.2
- Transformers: 4.29.0
- PyTorch: 1.13.1+cu117
- Datasets: 2.15.0
- Tokenizers: 0.13.3
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
neuralmagic/llama-2-7b-chat-marlin | neuralmagic | "2024-01-18T20:08:45Z" | 1,254 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"marlin",
"region:us"
] | text-generation | "2024-01-18T19:36:20Z" | ---
license: llama2
language:
- en
library_name: transformers
---
## llama-2-7b-chat-marlin
Example of converting a GPTQ model to Marlin format for fast batched decoding with [Marlin Kernels](https://github.com/IST-DASLab/marlin)
### Install Marlin
```bash
pip install torch
git clone https://github.com/IST-DASLab/marlin.git
cd marlin
pip install -e .
```
### Convert Model
Convert the model from GPTQ to Marlin format. Note that this requires:
- `sym=true`
- `group_size=128`
- `desc_activations=false`
```bash
pip install -U transformers accelerate auto-gptq optimum
```
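If you are quantizing a model to GPTQ yourself (rather than reusing an existing checkpoint such as `TheBloke/Llama-2-7B-Chat-GPTQ`), these constraints roughly correspond to a 🤗 `transformers` `GPTQConfig` like the illustrative sketch below; the base model id and calibration dataset are placeholders:
```python
# Illustrative GPTQ settings compatible with Marlin conversion
# (sym=True, group_size=128, desc_act=False); base model and dataset are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    sym=True,
    desc_act=False,
    dataset="c4",
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)
model.save_pretrained("./llama-2-7b-chat-gptq")
tokenizer.save_pretrained("./llama-2-7b-chat-gptq")
```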
Convert with the `convert.py` script in this repo:
```bash
python3 convert.py --model-id "TheBloke/Llama-2-7B-Chat-GPTQ" --save-path "./marlin-model" --do-generation
```
### Run Model
Load with the `load.load_model` utility from this repo and run inference as usual.
```python
from load import load_model
from transformers import AutoTokenizer
# Load model from disk.
model_path = "./marlin-model"
model = load_model(model_path).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Generate text.
inputs = tokenizer("My favorite song is", return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.batch_decode(outputs)[0])
```
|
Peilin-CodeVersion/Airline_Sentiment_Classifier_DistilBERT | Peilin-CodeVersion | "2024-06-01T10:18:08Z" | 1,254 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-30T16:19:44Z" | ---
license: apache-2.0
---
|
rajtest/t_llama_gguf_v3 | rajtest | "2024-07-02T18:14:59Z" | 1,254 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"license:unknown",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-06-30T19:10:59Z" | ---
license: unknown
---
|
mgoin/llama2.c-stories15M-quant-ds | mgoin | "2023-10-27T17:06:03Z" | 1,253 | 0 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-27T16:12:10Z" | Entry not found |
skumar9/Llama-medx_v3 | skumar9 | "2024-05-01T21:22:18Z" | 1,253 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-01T21:15:07Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
The LLM is designed to perform a wide range of natural language processing tasks, including but not limited to text generation, language translation, and question answering. It is suitable for both research and practical applications in industries such as healthcare.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
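As a minimal illustrative sketch (assuming standard 🤗 `transformers` text-generation usage; the prompt format and generation settings below are assumptions, not an official snippet):
```python
# Minimal illustrative sketch; prompt format and generation settings are assumptions.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="skumar9/Llama-medx_v3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Question: What are the common symptoms of iron-deficiency anemia?\nAnswer:"
out = pipe(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
print(out[0]["generated_text"])
```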
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
skratos115/qwen2-7b-OpenDevin-q5_k_m | skratos115 | "2024-06-28T18:27:22Z" | 1,253 | 0 | null | [
"gguf",
"text-generation",
"qwen2",
"instruct",
"unsloth",
"OpenDevin",
"dataset:xingyaoww/opendevin-code-act",
"license:mit",
"region:us"
] | text-generation | "2024-06-27T21:27:38Z" | ---
license: mit
tags:
- text-generation
- qwen2
- instruct
- unsloth
- OpenDevin
datasets:
- xingyaoww/opendevin-code-act
---
## Qwen2.7b.OpenDevin
brought to you by skratos115 (HF) / Kingatlas115 (GH) in collaboration with the official OpenDevin team ~xingyaoww
# Qwen2-7B-Instruct with OpenDevin Tool Calling
## Overview
This project involves the fine-tuning of the `Qwen2-7B-Instruct` model using the [opendevin-code-act dataset](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) with the help of Unsloth. The primary goal is to develop a more powerful LLM capable of effectively using the CodeAct framework for tool calling. This is still in early development and should not be used in production. We are working on building a bigger dataset of tool paths/trajectories and could use all the help we can get: please use the feedback integration to help us build better trajectories, which will be released to the public under the MIT license for OSS model training.
Read more here: https://x.com/gneubig/status/1802740786242420896 and http://www.linkedin.com/feed/update/urn:li:activity:7208507606728929280/
## Model Details
- **Model Name**: Qwen2-7B-Instruct
- **Dataset**: [opendevin-code-act](https://huggingface.co/datasets/xingyaoww/opendevin-code-act)
- **Training Platform**: Unsloth
Full merged model files are provided, as well as quantized f16, q4_k_m, q5_k_m, and q8_0 GGUF files.
I used `qwen2.7b.OD.q4_k_m.gguf` for my testing and got it to write me a simple script. More testing to come.
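For a quick local sanity check of the GGUF files outside of OpenDevin, a minimal `llama-cpp-python` sketch might look like the following (the file path, context size, and generation settings are illustrative):
```python
# Illustrative llama-cpp-python usage; file path and settings are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./qwen2.7b.OD.q4_k_m.gguf", n_ctx=4096)

messages = [
    {"role": "user", "content": "Write a Python script that prints the first 10 Fibonacci numbers."},
]
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.2)
print(out["choices"][0]["message"]["content"])
```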
## Running the Model
You can run this model using `vLLM` or `ollama`. The following instructions are for using `ollama`.
### Prerequisites
- Docker
- Hugging Face `transformers` library (version >= 4.37.0 is recommended)
### Running with Ollama
1. **Install Docker**: Ensure you have Docker installed on your machine.
2. **Pull the Latest Hugging Face Transformers**:
   ```bash
   pip install "transformers>=4.37.0"
   ```
3. **Set Up Your Workspace**:
   ```bash
   WORKSPACE_BASE=$(pwd)/workspace
   ```
4. **Run the Docker Command**:
   ```bash
   docker run -it \
       --pull=always \
       -e SANDBOX_USER_ID=$(id -u) \
       -e PERSIST_SANDBOX="true" \
       -e LLM_API_KEY="ollama" \
       -e LLM_BASE_URL="http://[yourIPhere or 0.0.0.0]:11434" \
       -e SSH_PASSWORD="make something up here" \
       -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
       -v $WORKSPACE_BASE:/opt/workspace_base \
       -v /var/run/docker.sock:/var/run/docker.sock \
       -p 3000:3000 \
       --add-host host.docker.internal:host-gateway \
       --name opendevin-app-$(date +%Y%m%d%H%M%S) \
       ghcr.io/opendevin/opendevin:main
   ```
Replace `[yourIPhere or 0.0.0.0]` with your actual IP address or use `0.0.0.0` for localhost.
## Early Development
This project is in its early stages, and we are continuously working to improve the model and its capabilities. Contributions and feedback are welcome.
## Support my work
Right now all of my work has been funded personally, if you like my work and can help support growth in the AI community consider joining or donating to my Patreon.
[Patreon Link](https://www.patreon.com/atlasaisecurity)
## License
This project is licensed under the [MIT License](LICENSE).
|
afrideva/Aira-2-1B1-GGUF | afrideva | "2023-12-02T01:04:47Z" | 1,252 | 0 | transformers | [
"transformers",
"gguf",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"dataset:nicholasKluge/instruct-aira-dataset",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:2203.09509",
"base_model:nicholasKluge/Aira-2-1B1",
"license:apache-2.0",
"co2_eq_emissions",
"region:us"
] | text-generation | "2023-12-02T01:00:43Z" | ---
base_model: nicholasKluge/Aira-2-1B1
co2_eq_emissions:
emissions: 1.78
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
source: CodeCarbon
training_type: fine-tuning
datasets:
- nicholasKluge/instruct-aira-dataset
inference: false
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
model_creator: nicholasKluge
model_name: Aira-2-1B1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: Greetings
text: <|startofinstruction|>How should I call you?<|endofinstruction|>
- example_title: Machine Learning
text: <|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>
- example_title: Ethics
text: <|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>
- example_title: Advise
text: <|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>
---
# nicholasKluge/Aira-2-1B1-GGUF
Quantized GGUF model files for [Aira-2-1B1](https://huggingface.co/nicholasKluge/Aira-2-1B1) from [nicholasKluge](https://huggingface.co/nicholasKluge)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [aira-2-1b1.fp16.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.fp16.gguf) | fp16 | 2.20 GB |
| [aira-2-1b1.q2_k.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.q2_k.gguf) | q2_k | 482.15 MB |
| [aira-2-1b1.q3_k_m.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.q3_k_m.gguf) | q3_k_m | 549.86 MB |
| [aira-2-1b1.q4_k_m.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.q4_k_m.gguf) | q4_k_m | 667.83 MB |
| [aira-2-1b1.q5_k_m.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.q5_k_m.gguf) | q5_k_m | 782.06 MB |
| [aira-2-1b1.q6_k.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.q6_k.gguf) | q6_k | 903.43 MB |
| [aira-2-1b1.q8_0.gguf](https://huggingface.co/afrideva/Aira-2-1B1-GGUF/resolve/main/aira-2-1b1.q8_0.gguf) | q8_0 | 1.17 GB |
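For quick local inference with one of the files above, a minimal `llama-cpp-python` sketch might look like this (file name, context size, and generation settings are illustrative; pick whichever quant you downloaded):
```python
# Illustrative llama-cpp-python usage for the GGUF files above; settings are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="aira-2-1b1.q4_k_m.gguf", n_ctx=2048)

# Aira wraps the user instruction in its special tokens (see the original card below).
prompt = "<|startofinstruction|>What is the capital of Brazil?<|endofinstruction|>"
out = llm(prompt, max_tokens=200, temperature=0.7, stop=["<|endofcompletion|>"])
print(out["choices"][0]["text"])
```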
## Original Model Card:
# Aira-2-1B1
`Aira-2` is the second version of the Aira instruction-tuned series. `Aira-2-1B1` is an instruction-tuned GPT-style model based on [TinyLlama-1.1B](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc).
Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 1,261,545,472 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** English
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 1.78 KgCO2 (Singapore)
- **Total Energy Consumption:** 3.64 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`What is a language model?`<|endofinstruction|>`A language model is a probability distribution over a vocabulary.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-1B1')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-1B1')
aira.eval()
aira.to(device)
question = input("Enter your question: ")
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token, return_tensors="pt").to(device)
responses = aira.generate(**inputs,
bos_token_id=tokenizer.bos_token_id,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
do_sample=True,
top_k=50,
max_length=500,
top_p=0.95,
temperature=0.7,
num_return_sequences=2)
print(f"Question: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 What is the capital of Brazil?
>>>Response 1: 🤖 The capital of Brazil is Brasília.
>>>Response 2: 🤖 The capital of Brazil is Brasília.
```
## Limitations
🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.
🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.
## Evaluation
| Model (TinyLlama) | Average | [ARC](https://arxiv.org/abs/1803.05457) | [TruthfulQA](https://arxiv.org/abs/2109.07958) | [ToxiGen](https://arxiv.org/abs/2203.09509) |
|---------------------------------------------------------------|-----------|-----------------------------------------|------------------------------------------------|---------------------------------------------|
| [Aira-2-1B1](https://huggingface.co/nicholasKluge/Aira-2-1B1) | **42.55** | 25.26 | **50.81** | **51.59** |
| TinyLlama-1.1B-intermediate-step-480k-1T | 37.52 | **30.89** | 39.55 | 42.13 |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)).
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://huggingface.co/nicholasKluge/Aira-2-1B1},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
}
```
## License
The `Aira-2-1B1` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.19 |
| ARC (25-shot) | 23.21 |
| HellaSwag (10-shot) | 26.97 |
| MMLU (5-shot) | 24.86 |
| TruthfulQA (0-shot) | 50.63 |
| Winogrande (5-shot) | 50.28 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.39 | |
chatdb/natural-sql-7b | chatdb | "2024-02-04T20:48:33Z" | 1,252 | 101 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"instruct",
"finetune",
"conversational",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-02T20:20:56Z" | ---
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- instruct
- finetune
library_name: transformers
license: cc-by-sa-4.0
pipeline_tag: text-generation
---
# **Natural-SQL-7B by ChatDB**
## Natural-SQL-7B is a model with very strong performance on Text-to-SQL instructions; it has an excellent understanding of complex questions and outperforms models of the same size in its space.
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/hafdsfrFCqrVbATIzV_EN.png" width="600">
[ChatDB.ai](https://chatdb.ai) | [Notebook](https://github.com/cfahlgren1/natural-sql/blob/main/natural-sql-7b.ipynb) | [Twitter](https://twitter.com/calebfahlgren)
# **Benchmarks**
### *Results on Novel Datasets not trained on via SQL-Eval*
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/5ynfoKPzI3_-WasQQt7qR.png" width="800">
<em>Big thanks to the [defog](https://huggingface.co/defog) team for open sourcing [sql-eval](https://github.com/defog-ai/sql-eval)</em>👏
Natural-SQL can also handle complex, compound questions that other models typically struggle with. A more detailed write-up with a small comparison test is available [here](https://chatdb.ai/post/naturalsql-vs-sqlcoder-for-text-to-sql).
# Usage
Make sure you have the correct version of the transformers library installed:
```sh
pip install transformers==4.35.2
```
### Loading the Model
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("chatdb/natural-sql-7b")
model = AutoModelForCausalLM.from_pretrained(
"chatdb/natural-sql-7b",
device_map="auto",
torch_dtype=torch.float16,
)
```
### **License**
The model weights are licensed under `CC BY-SA 4.0`, with extra guidelines for responsible use expanded from the original model's [Deepseek](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) license.
You're free to use and adapt the model, even commercially.
If you alter the weights, such as through fine-tuning, you must publicly share your changes under the same `CC BY-SA 4.0` license.
### Generating SQL
```python
# Build `prompt` following the "Prompt Template" section below.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model.generate(
**inputs,
num_return_sequences=1,
eos_token_id=100001,
pad_token_id=100001,
max_new_tokens=400,
do_sample=False,
num_beams=1,
)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs[0].split("```sql")[-1])
```
# Prompt Template
```
# Task
Generate a SQL query to answer the following question: `{natural language question}`
### PostgreSQL Database Schema
The query will run on a database with the following schema:
<SQL Table DDL Statements>
# SQL
Here is the SQL query that answers the question: `{natural language question}`
'''sql
```
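To assemble this template programmatically before calling the generation snippet above, a small illustrative helper might look like this (`question` and `schema_ddl` are placeholders you supply yourself):
```python
# Illustrative prompt assembly following the template above.
# `question` and `schema_ddl` are placeholders supplied by the user.
question = "Show me the day with the most users joining"
schema_ddl = open("schema.sql").read()  # your CREATE TABLE statements

sql_fence = "`" * 3 + "sql"  # the template ends by opening a ```sql block
prompt = f"""# Task
Generate a SQL query to answer the following question: `{question}`
### PostgreSQL Database Schema
The query will run on a database with the following schema:
{schema_ddl}
# SQL
Here is the SQL query that answers the question: `{question}`
{sql_fence}
"""
```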
# Example SQL Output
### Example Schemas
```sql
CREATE TABLE users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL,
password_hash TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE projects (
project_id SERIAL PRIMARY KEY,
project_name VARCHAR(100) NOT NULL,
description TEXT,
start_date DATE,
end_date DATE,
owner_id INTEGER REFERENCES users(user_id)
);
CREATE TABLE tasks (
task_id SERIAL PRIMARY KEY,
task_name VARCHAR(100) NOT NULL,
description TEXT,
due_date DATE,
status VARCHAR(50),
project_id INTEGER REFERENCES projects(project_id)
);
CREATE TABLE taskassignments (
assignment_id SERIAL PRIMARY KEY,
task_id INTEGER REFERENCES tasks(task_id),
user_id INTEGER REFERENCES users(user_id),
assigned_date DATE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE comments (
comment_id SERIAL PRIMARY KEY,
content TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
task_id INTEGER REFERENCES tasks(task_id),
user_id INTEGER REFERENCES users(user_id)
);
```
### Example SQL Outputs
**Question**: **Show me the day with the most users joining**
```sql
SELECT created_at::DATE AS day, COUNT(*) AS user_count
FROM users
GROUP BY day
ORDER BY user_count DESC
LIMIT 1;
```
**Question**: **Show me the project that has a task with the most comments**
```sql
SELECT p.project_name, t.task_name, COUNT(c.comment_id) AS comment_count
FROM projects p
JOIN tasks t ON p.project_id = t.project_id
JOIN comments c ON t.task_id = c.task_id
GROUP BY p.project_name, t.task_name
ORDER BY comment_count DESC
LIMIT 1;
```
**Question**: **What is the ratio of users with gmail addresses vs without?**
```sql
SELECT
SUM(CASE WHEN email ILIKE '%@gmail.com%' THEN 1 ELSE 0 END)::FLOAT / NULLIF(SUM(CASE WHEN email NOT ILIKE '%@gmail.com%' THEN 1 ELSE 0 END), 0) AS gmail_ratio
FROM
users;
``` |
John6666/titania-mix-realistic-pony-gbv20-sdxl | John6666 | "2024-06-27T03:26:25Z" | 1,252 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"cosplay",
"boobs",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-27T03:21:59Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- cosplay
- boobs
- pony
---
Original model is [here](https://civitai.com/models/349587/titaniamix-realistic-pony-sd15?modelVersionId=600750).
|
michellejieli/inappropriate_text_classifier | michellejieli | "2022-12-10T20:08:21Z" | 1,251 | 8 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"distilroberta",
"sentiment",
"NSFW",
"inappropriate",
"spam",
"twitter",
"reddit",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-12-10T20:00:03Z" | ---
license: creativeml-openrail-m
language: "en"
tags:
- distilroberta
- sentiment
- NSFW
- inappropriate
- spam
- twitter
- reddit
widget:
- text: "I like you. You remind me of me when I was young and stupid."
- text: "I see you’ve set aside this special time to humiliate yourself in public."
- text: "Have a great weekend! See you next week!"
---
# Fine-tuned DistilBERT for NSFW Inappropriate Text Classification
# Model Description
DistilBERT is a transformer model that performs sentiment analysis. I fine-tuned the model on Reddit posts with the purpose of classifying not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. The model predicts 2 classes, which are NSFW or safe for work (SFW).
The model is a fine-tuned version of [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert).
It was fine-tuned on 19604 Reddit posts pulled from the [Comprehensive Abusiveness Detection Dataset](https://aclanthology.org/2021.conll-1.43/).
# How to Use
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/inappropriate_text_classifier")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")
```
```python
Output:
[{'label': 'NSFW', 'score': 0.9684491753578186}]
```
# Contact
Please reach out to [[email protected]](mailto:[email protected]) if you have any questions or feedback.
# Reference
```
Hoyun Song, Soo Hyun Ryu, Huije Lee, and Jong Park. 2021. A Large-scale Comprehensive Abusiveness Detection Dataset with Multifaceted Labels from Reddit. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 552–561, Online. Association for Computational Linguistics.
```
---
|
kenhktsui/nano-phi-115M-v0.1 | kenhktsui | "2024-04-06T07:12:46Z" | 1,251 | 4 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"en",
"dataset:kenhktsui/minipile_quality_score_v1",
"dataset:kenhktsui/simple_wikipedia_LM_quality_score_v1",
"dataset:kenhktsui/refinedweb-3m_quality_score_v1",
"dataset:kenhktsui/TM-DATA_quality_score_v1",
"dataset:kenhktsui/openwebtext_quality_score_v1",
"arxiv:1910.09700",
"doi:10.57967/hf/2266",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-02T14:46:23Z" | ---
language:
- en
license: mit
library_name: transformers
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.1
repetition_penalty: 10
no_repeat_ngram_size: 4
eta_cutoff: 0.0006
renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: >-
Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made
Destiny. The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: >-
The Harry Potter series, written by J.K. Rowling, begins with the book
titled
example_title: Harry Potter Series
- text: >-
Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: >-
Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: >-
Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and
another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
model-index:
- name: nano-phi-115M-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 21.93
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 27.86
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.34
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.83
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
datasets:
- kenhktsui/minipile_quality_score_v1
- kenhktsui/simple_wikipedia_LM_quality_score_v1
- kenhktsui/refinedweb-3m_quality_score_v1
- kenhktsui/TM-DATA_quality_score_v1
- kenhktsui/openwebtext_quality_score_v1
---
# Model Card for nano-phi-115M-v0.1
Inspired by [Phi2](https://huggingface.co/microsoft/phi-2) and open-source small language model attempts like [smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA).
Pre-trained **from scratch** on 7B training tokens, with a quality filter applied to the datasets, resulting in 0.26B tokens of data.
The control is [kenhktsui/nano-phi-115M-control-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-control-v0.1), where the full dataset (0.6B tokens) is used.
There is not much degradation in performance despite using only **42%** of the data, thanks to the effective quality filter ("quality_score_v1" > 0.5).
In fact, upon inspection, the 6000-step checkpoint achieves performance similar to this model, signaling underlying **effective training due to high quality data**.
It took just 1d to train in Colab with an A100 40GB (**< USD 50**).
It achieves quite competitive results in evaluation given its training token count and training data size.
Yet, there are still large gaps (particularly in ARC, HellaSwag, MMLU and GSM8K) between nano-phi-115M-v0.1 and phi-2, which the author will attempt to narrow in the future.
No alignment has been done yet.
## How to use
To use the model, you will need transformers version >= 4.37.2:
```
pip install "transformers>=4.37.2"
```
```
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="kenhktsui/nano-phi-115M-v0.1")
pipe("I am a machine learning researcher. I work on", max_new_tokens=50, repetition_penalty=10.0)
# [{'generated_text': 'I am a machine learning researcher. I work on the problem of finding patterns in data, and it is not easy to find them all at once!\nThe first step was searching for pattern matching algorithms that are used by many people who have never seen an algorithm before (or even if they do).'}]
```
## Some metrics
- model
- hidden_size: 768
- num_key_value_heads: 8 (grouped query attention)
- num_attention_heads: 24
- num_hidden_layers: 6
- context length: 1024
- total params: 115M
- training:
- global steps: 14,000
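For reference, the hyperparameters listed above roughly map onto a 🤗 `transformers` Phi configuration as sketched below; this is illustrative only, and the repo's `config.json` is authoritative:
```python
# Rough, illustrative mapping of the hyperparameters listed above onto a PhiConfig.
# vocab_size and intermediate_size are not listed above and are left at library
# defaults here, so this will not reproduce the exact 115M-parameter checkpoint.
from transformers import PhiConfig

config = PhiConfig(
    hidden_size=768,
    num_hidden_layers=6,
    num_attention_heads=24,
    num_key_value_heads=8,         # grouped-query attention
    max_position_embeddings=1024,  # context length
)
print(config)
```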
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | kenhktsui/nano-phi-115M-v0.1|kenhktsui/nano-phi-115M-v0.1 (6000 steps)|[kenhktsui/nano-phi-115M-control-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-control-v0.1)|[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)|
|-----------------------|---------------------------|---------------------------|---------------------------|---------------------------|
| Model Para | 115M |115M |115M |2.7B |
| Dataset Size | 0.26B |0.26B |0.6B |250B |
| Training Token | 7B |3B|7B |1.4T |
| Context Length |1024 |1024|1024 |2048|
| Device |1xA100-40G|1xA100-40G|1xA100-40G |96xA100-80G|
| Training Time |2d4h |1d|2d4h |14d|
| Metric | kenhktsui/nano-phi-115M-v0.1|kenhktsui/nano-phi-115M-v0.1 (6000 steps)|[kenhktsui/nano-phi-115M-control-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-control-v0.1)|[microsoft/phi-2](https://huggingface.co/microsoft/phi-2) (Reproduced)|
|-----------------------|---------------------------|---------------------------|---------------------------|---------------------------|
| Avg. | 28.68 |29.03 | 28.75 |61.53 |
| ARC (25-shot) | 21.93 |22.27 | 21.67 |61.52 |
| HellaSwag (10-shot) | 27.87 |26.88 | 26.89 |75.13 |
| MMLU (5-shot) | 25.30 |25.01 | 24.76 |58.23 |
| TruthfulQA (0-shot) | 46.01 |48.03 | 47.69 |44.46 |
| Winogrande (5-shot) | 50.99 |52.01 | 51.46 |74.51 |
| GSM8K (5-shot) | 0.0 |0.0 | 0.0 |55.34 |
Details:
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 0, batch_size: 16
| Task |Version| Metric |Value | |Stderr|
|--------|------:|--------|-----:|---|-----:|
|arc_easy| 0|acc |0.4263|± |0.0101|
| | |acc_norm|0.3864|± |0.0100|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 25, batch_size: 16
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.1826|± |0.0113|
| | |acc_norm|0.2193|± |0.0121|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 10, batch_size: 16
| Task |Version| Metric |Value | |Stderr|
|---------|------:|--------|-----:|---|-----:|
|hellaswag| 0|acc |0.2733|± |0.0044|
| | |acc_norm|0.2787|± |0.0045|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 0, batch_size: 16
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.2521|± |0.0152|
| | |mc2 |0.4601|± |0.0154|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 5, batch_size: 16
| Task |Version| Metric |Value | |Stderr|
|-------------------------------------------------|------:|--------|-----:|---|-----:|
|hendrycksTest-abstract_algebra | 1|acc |0.2300|± |0.0423|
| | |acc_norm|0.2300|± |0.0423|
|hendrycksTest-anatomy | 1|acc |0.3111|± |0.0400|
| | |acc_norm|0.3111|± |0.0400|
|hendrycksTest-astronomy | 1|acc |0.2171|± |0.0336|
| | |acc_norm|0.2171|± |0.0336|
|hendrycksTest-business_ethics | 1|acc |0.2500|± |0.0435|
| | |acc_norm|0.2500|± |0.0435|
|hendrycksTest-clinical_knowledge | 1|acc |0.2226|± |0.0256|
| | |acc_norm|0.2226|± |0.0256|
|hendrycksTest-college_biology | 1|acc |0.2292|± |0.0351|
| | |acc_norm|0.2292|± |0.0351|
|hendrycksTest-college_chemistry | 1|acc |0.1700|± |0.0378|
| | |acc_norm|0.1700|± |0.0378|
|hendrycksTest-college_computer_science | 1|acc |0.2500|± |0.0435|
| | |acc_norm|0.2500|± |0.0435|
|hendrycksTest-college_mathematics | 1|acc |0.2500|± |0.0435|
| | |acc_norm|0.2500|± |0.0435|
|hendrycksTest-college_medicine | 1|acc |0.2023|± |0.0306|
| | |acc_norm|0.2023|± |0.0306|
|hendrycksTest-college_physics | 1|acc |0.3235|± |0.0466|
| | |acc_norm|0.3235|± |0.0466|
|hendrycksTest-computer_security | 1|acc |0.2600|± |0.0441|
| | |acc_norm|0.2600|± |0.0441|
|hendrycksTest-conceptual_physics | 1|acc |0.2511|± |0.0283|
| | |acc_norm|0.2511|± |0.0283|
|hendrycksTest-econometrics | 1|acc |0.2281|± |0.0395|
| | |acc_norm|0.2281|± |0.0395|
|hendrycksTest-electrical_engineering | 1|acc |0.2276|± |0.0349|
| | |acc_norm|0.2276|± |0.0349|
|hendrycksTest-elementary_mathematics | 1|acc |0.2460|± |0.0222|
| | |acc_norm|0.2460|± |0.0222|
|hendrycksTest-formal_logic | 1|acc |0.1508|± |0.0320|
| | |acc_norm|0.1508|± |0.0320|
|hendrycksTest-global_facts | 1|acc |0.3000|± |0.0461|
| | |acc_norm|0.3000|± |0.0461|
|hendrycksTest-high_school_biology | 1|acc |0.3387|± |0.0269|
| | |acc_norm|0.3387|± |0.0269|
|hendrycksTest-high_school_chemistry | 1|acc |0.2906|± |0.0319|
| | |acc_norm|0.2906|± |0.0319|
|hendrycksTest-high_school_computer_science | 1|acc |0.3100|± |0.0465|
| | |acc_norm|0.3100|± |0.0465|
|hendrycksTest-high_school_european_history | 1|acc |0.2182|± |0.0323|
| | |acc_norm|0.2182|± |0.0323|
|hendrycksTest-high_school_geography | 1|acc |0.3232|± |0.0333|
| | |acc_norm|0.3232|± |0.0333|
|hendrycksTest-high_school_government_and_politics| 1|acc |0.2021|± |0.0290|
| | |acc_norm|0.2021|± |0.0290|
|hendrycksTest-high_school_macroeconomics | 1|acc |0.2487|± |0.0219|
| | |acc_norm|0.2487|± |0.0219|
|hendrycksTest-high_school_mathematics | 1|acc |0.2741|± |0.0272|
| | |acc_norm|0.2741|± |0.0272|
|hendrycksTest-high_school_microeconomics | 1|acc |0.3319|± |0.0306|
| | |acc_norm|0.3319|± |0.0306|
|hendrycksTest-high_school_physics | 1|acc |0.3179|± |0.0380|
| | |acc_norm|0.3179|± |0.0380|
|hendrycksTest-high_school_psychology | 1|acc |0.2477|± |0.0185|
| | |acc_norm|0.2477|± |0.0185|
|hendrycksTest-high_school_statistics | 1|acc |0.4722|± |0.0340|
| | |acc_norm|0.4722|± |0.0340|
|hendrycksTest-high_school_us_history | 1|acc |0.2696|± |0.0311|
| | |acc_norm|0.2696|± |0.0311|
|hendrycksTest-high_school_world_history | 1|acc |0.2152|± |0.0268|
| | |acc_norm|0.2152|± |0.0268|
|hendrycksTest-human_aging | 1|acc |0.1973|± |0.0267|
| | |acc_norm|0.1973|± |0.0267|
|hendrycksTest-human_sexuality | 1|acc |0.2824|± |0.0395|
| | |acc_norm|0.2824|± |0.0395|
|hendrycksTest-international_law | 1|acc |0.2231|± |0.0380|
| | |acc_norm|0.2231|± |0.0380|
|hendrycksTest-jurisprudence | 1|acc |0.2222|± |0.0402|
| | |acc_norm|0.2222|± |0.0402|
|hendrycksTest-logical_fallacies | 1|acc |0.2822|± |0.0354|
| | |acc_norm|0.2822|± |0.0354|
|hendrycksTest-machine_learning | 1|acc |0.2768|± |0.0425|
| | |acc_norm|0.2768|± |0.0425|
|hendrycksTest-management | 1|acc |0.2039|± |0.0399|
| | |acc_norm|0.2039|± |0.0399|
|hendrycksTest-marketing | 1|acc |0.1966|± |0.0260|
| | |acc_norm|0.1966|± |0.0260|
|hendrycksTest-medical_genetics | 1|acc |0.2800|± |0.0451|
| | |acc_norm|0.2800|± |0.0451|
|hendrycksTest-miscellaneous | 1|acc |0.2746|± |0.0160|
| | |acc_norm|0.2746|± |0.0160|
|hendrycksTest-moral_disputes | 1|acc |0.2081|± |0.0219|
| | |acc_norm|0.2081|± |0.0219|
|hendrycksTest-moral_scenarios | 1|acc |0.2469|± |0.0144|
| | |acc_norm|0.2469|± |0.0144|
|hendrycksTest-nutrition | 1|acc |0.2647|± |0.0253|
| | |acc_norm|0.2647|± |0.0253|
|hendrycksTest-philosophy | 1|acc |0.1897|± |0.0223|
| | |acc_norm|0.1897|± |0.0223|
|hendrycksTest-prehistory | 1|acc |0.2377|± |0.0237|
| | |acc_norm|0.2377|± |0.0237|
|hendrycksTest-professional_accounting | 1|acc |0.2482|± |0.0258|
| | |acc_norm|0.2482|± |0.0258|
|hendrycksTest-professional_law | 1|acc |0.2464|± |0.0110|
| | |acc_norm|0.2464|± |0.0110|
|hendrycksTest-professional_medicine | 1|acc |0.4265|± |0.0300|
| | |acc_norm|0.4265|± |0.0300|
|hendrycksTest-professional_psychology | 1|acc |0.2614|± |0.0178|
| | |acc_norm|0.2614|± |0.0178|
|hendrycksTest-public_relations | 1|acc |0.1818|± |0.0369|
| | |acc_norm|0.1818|± |0.0369|
|hendrycksTest-security_studies | 1|acc |0.1959|± |0.0254|
| | |acc_norm|0.1959|± |0.0254|
|hendrycksTest-sociology | 1|acc |0.2289|± |0.0297|
| | |acc_norm|0.2289|± |0.0297|
|hendrycksTest-us_foreign_policy | 1|acc |0.2400|± |0.0429|
| | |acc_norm|0.2400|± |0.0429|
|hendrycksTest-virology | 1|acc |0.2048|± |0.0314|
| | |acc_norm|0.2048|± |0.0314|
|hendrycksTest-world_religions | 1|acc |0.2222|± |0.0319|
| | |acc_norm|0.2222|± |0.0319|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 5, batch_size: 16
| Task |Version|Metric|Value | |Stderr|
|----------|------:|------|-----:|---|-----:|
|winogrande| 0|acc |0.5099|± | 0.014|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/checkpoint-pegfss6f:v13,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 5, batch_size: 16
| Task |Version|Metric|Value | |Stderr|
|----------|------:|------|-----:|---|-----:|
|gsm8k | 0|acc | 0.0|± | 0.0|
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.66|
|AI2 Reasoning Challenge (25-Shot)|21.93|
|HellaSwag (10-Shot) |27.86|
|MMLU (5-Shot) |25.34|
|TruthfulQA (0-shot) |46.00|
|Winogrande (5-shot) |50.83|
|GSM8k (5-shot) | 0.00| |
ahdsoft/persian-sentence-transformer-news-wiki-pairs-v3 | ahdsoft | "2024-02-17T13:19:27Z" | 1,251 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-02-17T13:17:34Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ahdsoft/persian-sentence-transformer-news-wiki-pairs-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ahdsoft/persian-sentence-transformer-news-wiki-pairs-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ahdsoft/persian-sentence-transformer-news-wiki-pairs-v3')
model = AutoModel.from_pretrained('ahdsoft/persian-sentence-transformer-news-wiki-pairs-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ahdsoft/persian-sentence-transformer-news-wiki-pairs-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 17122 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 4250,
"evaluator": "__main__.LossEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 5137,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lytang/MiniCheck-RoBERTa-Large | lytang | "2024-06-20T06:38:57Z" | 1,251 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2404.10774",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-14T23:35:52Z" | ---
language:
- en
pipeline_tag: text-classification
license: mit
---
# Model Summary
This is a fact-checking model from our work:
📃 [**MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents**](https://arxiv.org/pdf/2404.10774.pdf) ([GitHub Repo](https://github.com/Liyan06/MiniCheck))
The model is based on RoBERTa-Large and predicts a binary label - 1 for supported and 0 for unsupported.
The model makes predictions at the *sentence level*. It takes a document and a sentence as input and determines
whether the sentence is supported by the document: **MiniCheck-Model(document, claim) -> {0, 1}**
MiniCheck-RoBERTa-Large is fine-tuned from the trained RoBERTa-Large model from AlignScore ([Zha et al., 2023](https://aclanthology.org/2023.acl-long.634.pdf))
on 14K synthetic data points generated from scratch in a structured way (more details in the paper).
### Model Variants
We also have two other MiniCheck model variants:
- [lytang/MiniCheck-Flan-T5-Large](https://huggingface.co/lytang/MiniCheck-Flan-T5-Large)
- [lytang/MiniCheck-DeBERTa-v3-Large](https://huggingface.co/lytang/MiniCheck-DeBERTa-v3-Large)
### Model Performance
<p align="center">
<img src="./cost-vs-bacc.png" width="360">
</p>
The performance of these models is evaluated on our newly collected benchmark (unseen by our models during training), [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact),
built from 10 recent human-annotated datasets on fact-checking and grounding LLM generations. MiniCheck-RoBERTa-Large outperforms all
existing specialized fact-checkers of a similar scale by a large margin but is 2% worse than our best model MiniCheck-Flan-T5-Large, which
is on par with GPT-4 but 400x cheaper. See the full results in our work.
Note: We only evaluated the performance of our models on real claims, i.e., without any human intervention of
any kind, such as injecting certain error types into model-generated claims. Such edited claims do not reflect
LLMs' actual behavior.
# Model Usage Demo
Please first clone our [GitHub Repo](https://github.com/Liyan06/MiniCheck) and install necessary packages from `requirements.txt`.
### Below is a simple use case
```python
from minicheck.minicheck import MiniCheck
doc = "A group of students gather in the school library to study for their upcoming final exams."
claim_1 = "The students are preparing for an examination."
claim_2 = "The students are on vacation."
# model_name can be one of ['roberta-large', 'deberta-v3-large', 'flan-t5-large']
scorer = MiniCheck(model_name='roberta-large', device=f'cuda:0', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=[doc, doc], claims=[claim_1, claim_2])
print(pred_label) # [1, 0]
print(raw_prob) # [0.9581979513168335, 0.031335990875959396]
```
### Test on our [LLM-AggreFact](https://huggingface.co/datasets/lytang/LLM-AggreFact) Benchmark
```python
import pandas as pd
from datasets import load_dataset
from minicheck.minicheck import MiniCheck
# load 13K test data
df = pd.DataFrame(load_dataset("lytang/LLM-AggreFact")['test'])
docs = df.doc.values
claims = df.claim.values
scorer = MiniCheck(model_name='roberta-large', device=f'cuda:0', cache_dir='./ckpts')
pred_label, raw_prob, _, _ = scorer.score(docs=docs, claims=claims) # ~ 15 mins, depending on hardware
```
To evaluate the results on the benchmark:
```python
from sklearn.metrics import balanced_accuracy_score
df['preds'] = pred_label
result_df = pd.DataFrame(columns=['Dataset', 'BAcc'])
for dataset in df.dataset.unique():
sub_df = df[df.dataset == dataset]
bacc = balanced_accuracy_score(sub_df.label, sub_df.preds) * 100
result_df.loc[len(result_df)] = [dataset, bacc]
result_df.loc[len(result_df)] = ['Average', result_df.BAcc.mean()]
result_df.round(1)
```
# Citation
```
@misc{tang2024minicheck,
title={MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents},
author={Liyan Tang and Philippe Laban and Greg Durrett},
year={2024},
eprint={2404.10774},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
O-TT/L3-8B-Stheno-v3.2-Q8_0-GGUF | O-TT | "2024-06-20T15:16:26Z" | 1,251 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Gryphe/Opus-WritingPrompts",
"dataset:Sao10K/Claude-3-Opus-Instruct-15K",
"dataset:Sao10K/Short-Storygen-v2",
"dataset:Sao10K/c2-Logs-Filtered",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-06-20T15:15:49Z" | ---
base_model: Sao10K/L3-8B-Stheno-v3.2
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# O-TT/L3-8B-Stheno-v3.2-Q8_0-GGUF
This model was converted to GGUF format from [`Sao10K/L3-8B-Stheno-v3.2`](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo O-TT/L3-8B-Stheno-v3.2-Q8_0-GGUF --hf-file l3-8b-stheno-v3.2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo O-TT/L3-8B-Stheno-v3.2-Q8_0-GGUF --hf-file l3-8b-stheno-v3.2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo O-TT/L3-8B-Stheno-v3.2-Q8_0-GGUF --hf-file l3-8b-stheno-v3.2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo O-TT/L3-8B-Stheno-v3.2-Q8_0-GGUF --hf-file l3-8b-stheno-v3.2-q8_0.gguf -c 2048
```
|
John6666/3x3x3mixxl-v2-sdxl | John6666 | "2024-06-29T00:35:13Z" | 1,251 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-29T00:22:29Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/464044?modelVersionId=605542).
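No usage snippet is provided upstream. As a rough sketch (assuming this repo hosts standard diffusers-format SDXL weights, as the pipeline tag suggests), the model could be loaded like this; the prompt and step count are arbitrary examples, not recommendations:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Repo id taken from this card; the prompt is an arbitrary example.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/3x3x3mixxl-v2-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, anime style, highly detailed", num_inference_steps=28).images[0]
image.save("sample.png")
```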
|
timm/eva02_small_patch14_224.mim_in22k | timm | "2024-02-10T23:37:49Z" | 1,250 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-feature-extraction | "2023-03-31T04:55:19Z" | ---
license: mit
library_name: timm
tags:
- image-feature-extraction
- timm
---
# Model card for eva02_small_patch14_224.mim_in22k
An EVA02 feature / representation model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.6
- GMACs: 6.1
- Activations (M): 18.3
- Image size: 224 x 224
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_small_patch14_224.mim_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_small_patch14_224.mim_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
s3nh/Mistral_Sonyichi-7B-slerp | s3nh | "2024-03-04T12:39:38Z" | 1,250 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:SanjiWatsuki/Sonya-7B",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"base_model:SanjiWatsuki/Kunoichi-7B",
"license:openrail",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-05T14:43:17Z" | ---
language:
- en
license: openrail
library_name: transformers
tags:
- mergekit
- merge
base_model:
- SanjiWatsuki/Sonya-7B
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
- SanjiWatsuki/Kunoichi-7B
pipeline_tag: text-generation
model-index:
- name: Mistral_Sonyichi-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.43
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Mistral_Sonyichi-7B-slerp
name: Open LLM Leaderboard
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Sonya-7B](https://huggingface.co/SanjiWatsuki/Sonya-7B)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: SanjiWatsuki/Sonya-7B
- layer_range: [0, 32]
model: SanjiWatsuki/Kunoichi-7B
```
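For completeness, a minimal inference sketch for the merged checkpoint (repo id taken from this card; dtype, device placement, and generation settings are arbitrary choices, not the authors' recommendation):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "s3nh/Mistral_Sonyichi-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a short story about llamas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```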
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Mistral_Sonyichi-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.52|
|AI2 Reasoning Challenge (25-Shot)|67.49|
|HellaSwag (10-Shot) |86.43|
|MMLU (5-Shot) |63.58|
|TruthfulQA (0-shot) |63.25|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |63.84|
|
neggles/dreamsim | neggles | "2024-02-12T05:28:55Z" | 1,250 | 1 | transformers | [
"transformers",
"dataset:PerceptionEval/DreamSim",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-02-08T07:04:13Z" | ---
license: mit
datasets:
- PerceptionEval/DreamSim
library_name: transformers
---
dreamsim! now in quasi-transformers quasi-diffusers form.
this probably won't work for you! but if it works for what i'm experimenting with, i'll try to get it upstreamed. |
lmms-lab/LLaVA-NeXT-Video-34B-DPO | lmms-lab | "2024-05-05T07:54:24Z" | 1,250 | 6 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"conversational",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-16T14:12:51Z" | ---
license: llama2
---
|
allagmaroua/my-distillbert-model | allagmaroua | "2024-06-06T23:39:10Z" | 1,250 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-28T08:54:35Z" | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
- recall
library_name: transformers
pipeline_tag: text-classification
---
# My Awesome Model
This is the model card for my DistilBERT text-classification model.
Label mapping: {'advertising': 0, 'announcement': 1, 'financial information': 2, 'subjective opinion': 3}
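A minimal usage sketch (assuming the checkpoint was pushed with a standard sequence-classification head; the returned label names may be the generic LABEL_0..LABEL_3 rather than the class names above):
```python
from transformers import pipeline

# Repo id taken from this card; the input sentence is an arbitrary example.
clf = pipeline("text-classification", model="allagmaroua/my-distillbert-model")

# The returned label may be a generic LABEL_i; map it back to the classes above if so.
print(clf("Huge discounts on all items this weekend only!"))
```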
|
seyonec/SMILES_tokenized_PubChem_shard00_50k | seyonec | "2021-05-20T21:10:29Z" | 1,249 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | Entry not found |
arnaudstiegler/long-layoutlm | arnaudstiegler | "2022-04-07T20:22:11Z" | 1,249 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlm",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-04-03T22:43:42Z" | Entry not found |
MU-NLPC/whisper-small-audio-captioning | MU-NLPC | "2024-03-13T11:52:19Z" | 1,249 | 5 | transformers | [
"transformers",
"pytorch",
"whisper",
"en",
"dataset:AudioSet",
"dataset:AudioCaps",
"dataset:Clotho-v2.1",
"arxiv:2305.09690",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2023-05-15T17:48:16Z" | ---
datasets:
- AudioSet
- AudioCaps
- Clotho-v2.1
metrics:
- SPICE
- CIDEr
- SPIDEr
- METEOR
- SacreBLEU
model-index:
- name: whisper-small-audio-captioning
results:
- task:
type: audio-captioning
name: Audio Captioning
dataset:
type: clotho-v2.1
name: Clotho
split: evaluation
metrics:
- type: SPICE
value: 0.1234
- type: CIDEr
value: 0.4142
- type: SPIDEr
value: 0.2687
- type: METEOR
value: 0.3781
- type: SacreBLEU
value: 15.76
license: cc-by-nc-4.0
language:
- en
---
# Model Card for Whisper Audio Captioning
A transformer encoder-decoder model for automatic audio captioning. As opposed to speech-to-text, captioning describes the content of audio clips, such as prominent sounds or environmental noises. This task has numerous practical applications, e.g., for providing access to audio information for people with hearing impairments or improving the searchability of audio content.
- **Model type:** Whisper encoder-decoder transformer
- **Language(s) (NLP):** en
- **License:** cc-by-nc-4.0
- **Parent Model:** openai/whisper-small
- **Resources for more information:**
- [GitHub Repo](https://github.com/prompteus/audio-captioning)
- [Technical Report](https://arxiv.org/abs/2305.09690)
## Usage
The model expects an audio clip (up to 30s) to the encoder as an input and information about caption style as forced prefix to the decoder.
Minimal example:
```python
import librosa
import transformers

# Load model
# Note: WhisperForAudioCaptioning is the authors' custom class; import it from their
# GitHub repo or from this model repository (see the note below the example).
checkpoint = "MU-NLPC/whisper-small-audio-captioning"
model = WhisperForAudioCaptioning.from_pretrained(checkpoint)
tokenizer = transformers.WhisperTokenizer.from_pretrained(checkpoint, language="en", task="transcribe")
feature_extractor = transformers.WhisperFeatureExtractor.from_pretrained(checkpoint)
# Load and preprocess audio
input_file = "..."
audio, sampling_rate = librosa.load(input_file, sr=feature_extractor.sampling_rate)
features = feature_extractor(audio, sampling_rate=sampling_rate, return_tensors="pt").input_features
# Prepare caption style
style_prefix = "clotho > caption: "
style_prefix_tokens = tokenizer("", text_target=style_prefix, return_tensors="pt", add_special_tokens=False).labels
# Generate caption
model.eval()
outputs = model.generate(
inputs=features.to(model.device),
forced_ac_decoder_ids=style_prefix_tokens,
max_length=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
Example output:
*clotho > caption: Rain is pouring down and thunder is rumbling in the background.*
The style prefix influences the style of the caption. The model knows three styles: `audioset > keywords: `, `audiocaps > caption: `, and `clotho > caption: `. It was finetuned on Clotho, and that is the intended "default" style.
WhisperTokenizer must be initialized with `language="en"` and `task="transcribe"`.
Our model class `WhisperForAudioCaptioning` can be found in our git repository or here on the HuggingFace Hub in this model repository. The class overrides the default Whisper `generate` method to support forcing a decoder prefix.
## Training details
The model was initialized from the original speech-to-text `openai/whisper-small` weights. It was then pretrained on a mix of (1) a subset of AudioSet with synthetic labels, (2) the AudioCaps captioning dataset, and (3) the Clotho v2.1 captioning dataset. Finally, it was finetuned on Clotho v2.1 to focus the model on the specific style of its captions. For each training input, the model was informed about the source of the data, so it can mimic all three caption styles.
During pretraining, the ratio of samples in each batch was approximately 12:3:1 (AudioSet:AudioCaps:Clotho). The pretraining took 19800 steps with batch size 32 and learning rate 2e-5. Finetuning was done on Clotho only, and the model was trained for 1500 steps with batch size 32 and learning rate 4e-6. All layers except *fc1* layers were frozen during finetuning.
For more information about the training regime, see the [technical report](TODO).
## Evaluation details
Metrics reported in the metadata were computed on Clotho v2.1 test split with captions generated using a beam search with 5 beams.
| | whisper-tiny | whisper-small | whisper-large-v2 |
|----------------------|--------------|---------------|------------------|
| SacreBLEU | 13.77 | 15.76 | 16.50 |
| METEOR | 0.3452 | 0.3781 | 0.3782 |
| CIDEr | 0.3404 | 0.4142 | 0.4331 |
| SPICE | 0.1077 | 0.1234 | 0.1257 |
| SPIDEr | 0.2240 | 0.2687 | 0.2794 |
## Limitations
The captions generated by the model can be misleading or not truthful, even if they appear convincing. Hallucination occurs especially in domains that were not present in the finetuning data.
While the original speech-to-text checkpoints by OpenAI were trained on multilingual data, our training data contains only English captions, so the model is not expected to support other languages.
## Licence
The model weights are published under non-commercial license CC BY-NC 4.0 as the model was finetuned on a dataset for non-commercial use.
## Contact
If you'd like to chat about this, please get in touch with us via email at kadlcik`<at>`mail.muni.cz or ahajek`<at>`mail.muni.cz.
|
gzeskas/test-model-upload-1 | gzeskas | "2023-08-01T15:02:43Z" | 1,249 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"anime",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-24T08:48:29Z" | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- anime
inference: false
---
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
Please also support it by giving 5 stars and a heart, which will notify you of new updates.
Please consider supporting me on Patreon or buy me a coffee
- https://www.patreon.com/Lykon275
- https://snipfeed.co/lykon
You can run this model on:
- https://huggingface.co/spaces/Lykon/DreamShaper-webui
- Mage.space, sinkin.ai and more |
not-lain/soc2 | not-lain | "2024-06-20T01:01:00Z" | 1,248 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T00:50:10Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** not-lain
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CHE-72/Phi-3-medium-128k-instruct-Q8_0-GGUF | CHE-72 | "2024-06-21T19:57:38Z" | 1,248 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"region:us"
] | text-generation | "2024-06-21T19:56:39Z" | ---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# CHE-72/Phi-3-medium-128k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q8_0-GGUF --hf-file phi-3-medium-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q8_0-GGUF --hf-file phi-3-medium-128k-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q8_0-GGUF --hf-file phi-3-medium-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q8_0-GGUF --hf-file phi-3-medium-128k-instruct-q8_0.gguf -c 2048
```
|
kirp/kosmos2_5 | kirp | "2024-07-02T15:40:54Z" | 1,247 | 4 | transformers | [
"transformers",
"safetensors",
"kosmos-2.5",
"text2text-generation",
"en",
"arxiv:2309.11419",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-14T15:09:05Z" | ---
language: en
license: mit
---
# Under testing
# Kosmos-2.5
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/kosmos-2.5)
## Model description
Kosmos-2.5 is a multimodal literate model for machine reading of text-intensive images. Pre-trained on large-scale text-intensive images, Kosmos-2.5 excels in two distinct yet cooperative transcription tasks: (1) generating spatially-aware text blocks, where each block of text is assigned its spatial coordinates within the image, and (2) producing structured text output that captures styles and structures into the markdown format. This unified multimodal literate capability is achieved through a shared decoder-only auto-regressive Transformer architecture, task-specific prompts, and flexible text representations. We evaluate Kosmos-2.5 on end-to-end document-level text recognition and image-to-markdown text generation. Furthermore, the model can be readily adapted for any text-intensive image understanding task with different prompts through supervised fine-tuning, making it a general-purpose tool for real-world applications involving text-rich images. This work also paves the way for the future scaling of multimodal large language models.
[Kosmos-2.5: A Multimodal Literate Model](https://arxiv.org/abs/2309.11419)
## NOTE:
Since this is a generative model, there is a risk of **hallucination** during the generation process, and it **CAN NOT** guarantee the accuracy of all OCR/Markdown results in the images.
## Use with transformers:
```bash
pip install git+https://github.com/tic-top/transformers.git
```
```python
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
import re
repo = "kirp/kosmos2_5"
device = "cuda:0"
dtype = torch.bfloat16
model = AutoModelForVision2Seq.from_pretrained(repo, device_map=device, torch_dtype=dtype)
processor = AutoProcessor.from_pretrained(repo)
url = "https://huggingface.co/kirp/kosmos2_5/resolve/main/receipt_00008.png"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "<ocr>" # <md>
inputs = processor(text=prompt, images=image, return_tensors="pt")
height, width = inputs.pop("height"), inputs.pop("width")
raw_width, raw_height = image.size
scale_height = raw_height / height
scale_width = raw_width / width
inputs = {k: v.to(device) if v is not None else None for k, v in inputs.items()}
inputs["flattened_patches"] = inputs["flattened_patches"].to(dtype)
generated_ids = model.generate(
**inputs,
max_new_tokens=1024,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
def postprocess(y, scale_height, scale_width):
y = y.replace(prompt, "")
if "<md>" in prompt:
return y
pattern = r"<bbox><x_\d+><y_\d+><x_\d+><y_\d+></bbox>"
bboxs_raw = re.findall(pattern, y)
lines = re.split(pattern, y)[1:]
bboxs = [re.findall(r"\d+", i) for i in bboxs_raw]
bboxs = [[int(j) for j in i] for i in bboxs]
info = ""
for i in range(len(lines)):
box = bboxs[i]
x0, y0, x1, y1 = box
if not (x0 >= x1 or y0 >= y1):
x0 = int(x0 * scale_width)
y0 = int(y0 * scale_height)
x1 = int(x1 * scale_width)
y1 = int(y1 * scale_height)
info += f"{x0},{y0},{x1},{y0},{x1},{y1},{x0},{y1},{lines[i]}"
return info
output_text = postprocess(generated_text[0], scale_height, scale_width)
print(output_text)
```
```text
55,595,71,595,71,629,55,629,1
82,595,481,595,481,635,82,635,[REG] BLACK SAKURA
716,590,841,590,841,629,716,629,45,455
55,637,71,637,71,672,55,672,1
82,637,486,637,486,675,82,675,COOKIE DOH SAUCES
818,632,843,632,843,668,818,668,0
51,683,71,683,71,719,51,719,1
82,683,371,683,371,719,82,719,NATA DE COCO
820,677,845,677,845,713,820,713,0
32,770,851,770,851,811,32,811,Sub Total 45,455
28,811,853,811,853,858,28,858,PB1 (10%) 4,545
28,857,855,857,855,905,28,905,Rounding 0
24,905,858,905,858,956,24,956,Total 50,000
17,1096,868,1096,868,1150,17,1150,Card Payment 50,000
```
## Citation
If you find Kosmos-2.5 useful in your research, please cite the following paper:
```
@article{lv2023kosmos,
title={Kosmos-2.5: A multimodal literate model},
author={Lv, Tengchao and Huang, Yupan and Chen, Jingye and Cui, Lei and Ma, Shuming and Chang, Yaoyao and Huang, Shaohan and Wang, Wenhui and Dong, Li and Luo, Weiyao and others},
journal={arXiv preprint arXiv:2309.11419},
year={2023}
}
```
## License
The content of this project itself is licensed under the [MIT](https://github.com/microsoft/unilm/blob/master/kosmos-2.5/LICENSE)
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
|
ChrisWilson011016/5E4paeSqk21XXoRdGbisC8jgtxqPAMgD1UhoYRtqDHPbUMAL_vgg | ChrisWilson011016 | "2024-03-04T18:55:39Z" | 1,246 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-24T15:20:44Z" | Entry not found |
PrunaAI/OpenBioLLM-Llama3-70B-GGUF-smashed | PrunaAI | "2024-04-29T09:42:11Z" | 1,246 | 3 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | "2024-04-28T23:00:59Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the aaditya/OpenBioLLM-Llama3-70B model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/OpenBioLLM-Llama3-70B-GGUF-smashed and below it, a specific filename to download, such as: OpenBioLLM-Llama3-70B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/OpenBioLLM-Llama3-70B-GGUF-smashed OpenBioLLM-Llama3-70B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/OpenBioLLM-Llama3-70B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/OpenBioLLM-Llama3-70B-GGUF-smashed OpenBioLLM-Llama3-70B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m OpenBioLLM-Llama3-70B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./OpenBioLLM-Llama3-70B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./OpenBioLLM-Llama3-70B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
google/bert_for_seq_generation_L-24_bbc_encoder | google | "2020-09-11T07:57:22Z" | 1,245 | 1 | transformers | [
"transformers",
"pytorch",
"bert-generation",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | Entry not found |
TimKond/S-PubMedBert-MedQuAD | TimKond | "2022-06-14T21:17:34Z" | 1,245 | 5 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2022-06-09T14:50:47Z" | ---
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# S-PubMedBert-MedQuAD
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TimKond/S-PubMedBert-MedQuAD')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TimKond/S-PubMedBert-MedQuAD')
model = AutoModel.from_pretrained('TimKond/S-PubMedBert-MedQuAD')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.DataLoader` of length 82590 with parameters:
```
{'batch_size': 2, 'shuffle':True}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss` with parameters:
```
{'num_labels': 2, 'sentence_embedding_dimension': '768'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": None,
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 8259,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Shinhati2023/Big_World | Shinhati2023 | "2023-12-01T19:47:47Z" | 1,245 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2023-11-25T02:57:17Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'A beautiful lady '
parameters:
negative_prompt: >-
deformed, bad anatomy, disfigured, poorly drawn face, mutation, mutated,
extra limb, ugly, disgusting, poorly drawn hands, missing limb, floating
limbs, disconnected limbs, malformed hands, blurry,.
output:
url: images/IMG_20231117_154148_242.webp
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: openrail++
---
# Big_world
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shinhati2023/Big_World/tree/main) them in the Files & versions tab.
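A hedged usage sketch with diffusers, based on the card metadata (SDXL base model, LoRA in Safetensors format). The LoRA weight filename is not stated here, so `load_lora_weights` is pointed at the repo and left to resolve the file; adjust `weight_name` if needed:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumption: diffusers can resolve the LoRA safetensors file in the repo automatically;
# pass weight_name="<file>.safetensors" explicitly if it cannot.
pipe.load_lora_weights("Shinhati2023/Big_World")

# Prompt taken from the widget example above.
image = pipe("A beautiful lady", num_inference_steps=30).images[0]
image.save("big_world_sample.png")
```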
|