modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
bongchoi/MoMo-70B-V1.0 | bongchoi | "2023-11-12T03:09:38Z" | 1,058 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-10T08:10:00Z" | ---
license: llama2
language:
- en
library_name: transformers
---
## Dataset Details
### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
- No other datasets were used apart from those mentioned above
- No benchmark test sets or their corresponding training sets were used
## Prompt Template
### Alpaca-style
|
KBNIT/Llama-3-8B-kolon-ITD-5-v1.0 | KBNIT | "2024-04-23T04:39:33Z" | 1,058 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T01:58:33Z" | ---
license: cc-by-nc-4.0
language:
- ko
- en
---
## Base Model:
We built an LLM based on meta-llama/Meta-Llama-3-8B.
## Model Description
We use QDoRA(256, 64) with lr = 1e-5 and NEFTune = 3.
## Train Detail
We trained on our modified private data for 5 epochs.
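A rough sketch of what a QDoRA + NEFTune configuration like the one above might look like with `peft` and `trl`; the mapping of (256, 64) to rank and alpha, the choice of libraries, and everything not stated above are assumptions rather than the actual training code.
```python
# Assumed reconstruction of a QDoRA(256, 64) + NEFTune setup; not the authors' code.
from peft import LoraConfig
from trl import SFTConfig

peft_config = LoraConfig(
    r=256,                # assumed: first value in QDoRA(256, 64)
    lora_alpha=64,        # assumed: second value in QDoRA(256, 64)
    use_dora=True,        # DoRA applied on top of a quantized base model ("QDoRA")
    task_type="CAUSAL_LM",
)

train_config = SFTConfig(
    learning_rate=1e-5,        # lr = 1e-5 as stated above
    neftune_noise_alpha=3,     # NEFTune = 3 as stated above
    num_train_epochs=5,        # 5 epochs as stated above
    output_dir="outputs",
)
```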
## Others
We are building an LLM for Kolon! |
bartowski/Einstein-v6.1-Llama3-8B-GGUF | bartowski | "2024-05-02T19:53:53Z" | 1,058 | 5 | null | [
"gguf",
"axolotl",
"generated_from_trainer",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"llama",
"llama3",
"text-generation",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"model-index",
"region:us"
] | text-generation | "2024-05-02T19:36:54Z" | ---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama
- llama3
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
model-index:
- name: Einstein-v6.1-Llama3-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Einstein-v6.1-Llama3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
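As a concrete illustration, here is a minimal sketch of filling this template in Python; the system prompt and user prompt below are placeholder text, not part of the original card.
```python
# Fill the ChatML-style template above; both strings are placeholder examples.
system_prompt = "You are a helpful scientific assistant."
prompt = "Explain in two sentences why the sky is blue."

full_prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(full_prompt)
```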
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Einstein-v6.1-Llama3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v6.1-Llama3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Einstein-v6.1-Llama3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [Einstein-v6.1-Llama3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Einstein-v6.1-Llama3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Einstein-v6.1-Llama3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Einstein-v6.1-Llama3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Einstein-v6.1-Llama3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Einstein-v6.1-Llama3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Einstein-v6.1-Llama3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Einstein-v6.1-Llama3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Einstein-v6.1-Llama3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-GGUF --include "Einstein-v6.1-Llama3-8B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-GGUF --include "Einstein-v6.1-Llama3-8B-Q8_0.gguf/*" --local-dir Einstein-v6.1-Llama3-8B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Einstein-v6.1-Llama3-8B-Q8_0) or download them all in place (./)
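As an alternative to the CLI, the same download can be done from Python with `huggingface_hub` (a minimal sketch, using one of the filenames from the table above):
```python
# Python alternative to the CLI command above, using huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Einstein-v6.1-Llama3-8B-GGUF",
    filename="Einstein-v6.1-Llama3-8B-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```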
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
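As a rough helper for this sizing rule, here is a small sketch that picks the largest quant from the table above fitting a given memory budget; the file sizes are copied from the table, and the 1.5GB headroom is just an illustrative middle of the 1-2GB range.
```python
# Pick the largest quant that fits a memory budget, leaving some headroom.
# Sizes (GB) are taken from the table above; the headroom is an illustrative value.
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(available_gb: float, headroom_gb: float = 1.5) -> str:
    budget = available_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else "nothing fits"

print(pick_quant(8.0))   # 8 GB of VRAM  -> Q5_K_M
print(pick_quant(24.0))  # 24 GB of VRAM -> Q8_0
```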
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
junxinyijiu/llama3 | junxinyijiu | "2024-06-25T03:40:58Z" | 1,058 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-05-16T00:55:40Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ngoan/NgoanYi | ngoan | "2023-12-10T19:35:12Z" | 1,057 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"feature-extraction",
"text-generation-inference",
"Text Generation",
"Yi",
"ngoan",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"dataset:TIGER-Lab/MathInstruct",
"dataset:SUSTech/prm800k",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-09T06:28:41Z" | ---
license: other
datasets:
- garage-bAInd/Open-Platypus
- TIGER-Lab/MathInstruct
- SUSTech/prm800k
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- llama
- Text Generation
- Yi
- ngoan
---
Model Card for NgoanYI
# Overview
Model Name: NgoanYI
Base Model: Yi-34B
Version: 1.0
Developed by: Pham Van Ngoan
# Description
NgoanYI is a specialized chat model, fine-tuned on the Yi-34B architecture. It is designed to provide enhanced performance in conversation-based applications. The model has been trained specifically on three distinct datasets:
- Garage-bAInd/Open-Platypus
- TIGER-Lab/MathInstruct
- SUSTech/prm800k
# Training
NgoanYI's training was conducted over one epoch, with the intent of testing and validating the model's initial capabilities. This approach provides insights into the model's performance, while setting the stage for further refinements in future iterations.
# Contact Information
For more information, queries, or feedback regarding the NgoanYI model, please contact:
Email: [email protected] |
dreamgen/opus-v1-34b | dreamgen | "2024-03-29T15:40:08Z" | 1,057 | 14 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"axolotl",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-21T14:01:35Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
license: cc-by-nc-nd-4.0
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1-34b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1-34b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; beware of the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization for the names.
</details>
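For concreteness, here is a minimal sketch of building such a prompt in Python. It only illustrates the `text` role and the `names=` spacing described above; the helper and the example content are made up for illustration rather than taken from `example/prompt/format.py`.
```python
# Minimal illustration of the extended ChatML format with the "text" role.
# The helper below is a sketch, not the official formatting code.
def format_turn(role: str, content: str, names: list[str] | None = None) -> str:
    header = role
    if names:
        # note the space after "names=" and after each ";"
        header += " names= " + "; ".join(names)
    return f"<|im_start|>{header}\n{content}<|im_end|>\n"

prompt = (
    format_turn("system", "Plot, style and character descriptions go here.")
    + format_turn("user", "Alice greets Bob at the tavern.")
    + format_turn("text", '"Evening, Bob," Alice said with a grin.', names=["Alice"])
    + "<|im_start|>text names= Bob\n"  # left open for the model to continue as Bob
)
print(prompt)
```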
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks:
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
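As an illustration, a request along these lines could look as follows for an OpenAI-compatible local server (e.g. vLLM or a llama.cpp server); exact parameter names vary by backend, so treat this as a hedged sketch rather than a definitive configuration.
```python
# Sketch of sampling settings matching the recommendations above.
# Parameter names follow common OpenAI-compatible local servers and may differ per backend.
payload = {
    "model": "dreamgen/opus-v1-34b",
    "prompt": "<|im_start|>system\n...<|im_end|>\n<|im_start|>user\n...<|im_end|>\n<|im_start|>text\n",
    "temperature": 0.8,          # recommended starting point
    "min_p": 0.1,                # recommended starting point
    "repetition_penalty": 1.05,  # optional, mainly useful at lower temperatures
    "max_tokens": 512,
}
```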
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 70B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Official SillyTavern documentation for DreamGen](https://docs.sillytavern.app/usage/api-connections/dreamgen/) -- applies to both the API and local models
- SillyTavern (staging) comes with built-in DreamGen preset for RP
- Other presets can be found [here](https://huggingface.co/dreamgen/opus-v1-34b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- Character cards specifically rewritten for the built-in DreamGen preset:
- [Seraphina](configs/silly_tavern/cards/Seraphina.png) (based on the default Seraphina card)
- [Lara Lightland](configs/silly_tavern/cards/LaraLightland.png) (based on the card by Deffcolony)
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Like ChatML, but with the "assistant" role changed to "text".
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Like ChatML, but with the "assistant" role changed to "text".
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only; other use is not permitted. |
KBNIT/nox-solar-10.7b-v4-kolon-ITD-5-v2.1 | KBNIT | "2024-04-11T04:06:26Z" | 1,057 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-11T00:39:23Z" | ---
license: cc-by-nc-4.0
language:
- ko
- en
---
## Base Model:
We built an LLM based on davidkim205/nox-solar-10.7b-v4.
## Model Description
We use QDoRA(256, 64) and NEFTune for LLM fine-tuning, with lr = 1e-5 and NEFTune = 5.
## Train Detail
We trained on our modified private data for 5 epochs, with adjusted hyperparameters.
## Others
We are building an LLM for Kolon! |
QuantFactory/InstructLM-500M-GGUF | QuantFactory | "2024-06-22T17:45:55Z" | 1,057 | 3 | null | [
"gguf",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:instruction-pretrain/ft-instruction-synthesizer-collection",
"arxiv:2406.14491",
"arxiv:2309.09530",
"base_model:instruction-pretrain/InstructLM-500M",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-22T10:44:16Z" | ---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
pipeline_tag: text-generation
base_model: instruction-pretrain/InstructLM-500M
---
# QuantFactory/InstructLM-500M-GGUF
This is a quantized version of [instruction-pretrain/InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M) created using llama.cpp.
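A minimal sketch of running one of the GGUF files locally with `llama-cpp-python`; the filename below is an assumption and should be checked against the repository's file list.
```python
# Run a GGUF quant with llama-cpp-python; the filename is an assumed example.
from llama_cpp import Llama

llm = Llama(
    model_path="InstructLM-500M.Q4_K_M.gguf",  # assumed filename -- check the repo files
    n_ctx=2048,
)
out = llm("Explain instruction pre-training in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```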
# Model Description
## Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **general models pre-trained from scratch** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>
## Resources
**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**
- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
- [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
- [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
- [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
- [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
## General Pre-Training From Scratch
We augment the [RefinedWeb corpora](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.
To evaluate our general base model using the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness)
1. Setup dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-500M
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our model requires add_bos_token to be True
accelerate launch -m lm_eval --model hf \
--model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
--gen_kwargs do_sample=False \
--tasks piqa,hellaswag,winogrande \
--batch_size auto \
--num_fewshot 0
accelerate launch -m lm_eval --model hf \
--model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
--gen_kwargs do_sample=False \
--tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
--batch_size auto \
--num_fewshot 5
```
## Model Citation
If you find our work helpful, please cite us:
[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
``` |
timm/coatnet_0_rw_224.sw_in1k | timm | "2023-05-10T23:41:56Z" | 1,056 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-01-20T21:24:19Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coatnet_0_rw_224.sw_in1k
A timm specific CoAtNet image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm`-specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 27.4
- GMACs: 4.4
- Activations (M): 18.7
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coatnet_0_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_0_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_0_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
timm/regnety_004.tv2_in1k | timm | "2024-02-10T23:33:10Z" | 1,056 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2003.13678",
"license:bsd-3-clause",
"region:us"
] | image-classification | "2023-03-21T06:37:01Z" | ---
license: bsd-3-clause
library_name: timm
tags:
- image-classification
- timm
---
# Model card for regnety_004.tv2_in1k
A RegNetY-400MF image classification model. Pretrained on ImageNet-1k by torchvision contributors (see ImageNet1K-V2 weight details https://github.com/pytorch/vision/issues/3995#new-recipe).
The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* configurable output stride (dilation)
* configurable activation and norm layers
* option for a pre-activation bottleneck block used in RegNetV variant
* only known RegNetZ model definitions with pretrained weights
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.3
- GMACs: 0.4
- Activations (M): 3.9
- Image size: 224 x 224
- **Papers:**
- Designing Network Design Spaces: https://arxiv.org/abs/2003.13678
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnety_004.tv2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_004.tv2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 48, 56, 56])
# torch.Size([1, 104, 28, 28])
# torch.Size([1, 208, 14, 14])
# torch.Size([1, 440, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_004.tv2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 440, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
 author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Duxiaoman-DI/XuanYuan-70B | Duxiaoman-DI | "2023-11-02T05:37:23Z" | 1,056 | 44 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-22T02:30:20Z" | ---
license: llama2
---
XuanYuan-70B is a series of financial large language models built on Llama2-70B with enhanced Chinese capabilities. It includes base models obtained through incremental pre-training on large amounts of Chinese and English corpora, as well as chat models aligned with high-quality instruction data.
Our goal: preserve the general capabilities of the large model as much as possible while clearly improving its financial-domain capabilities, so that it can serve the financial sector.
The currently released models and download links are as follows:
| | Base Model | Chat Model | 8-bit Quantized Chat Model | 4-bit Quantized Chat Model |
| --------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| XuanYuan-70B-8k | 🤗 [XuanYuan-70B](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B) | 🤗 [XuanYuan-70B-Chat](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat) | 🤗 [XuanYuan-70B-Chat-8bit](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat-8bit ) | 🤗 [XuanYuan-70B-Chat-4bit](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat-4bit) |
# Model Introduction
Considering that financial scenarios involve a large number of long-text tasks, and building on our efficient distributed training framework, we extended the model's context length from 4k to 8k and 16k during pre-training. To the best of our knowledge, this is the first open-source large model at the 70B parameter scale to reach a context length of 8k or more.
For details, see: [XuanYuan-70B](https://github.com/Duxiaoman-DI/XuanYuan)
## Base Model Pre-training
(1) **Data quality**
- We designed a data-cleaning pipeline and carefully prepared high-quality data of various general types (web pages, encyclopedias, forums, social media, Q&A, etc.) as well as finance-related data (financial news, company announcements, financial encyclopedias, financial books, certification exam questions, etc.).
- Chinese and English data: Llama2's English capability is already strong, so to keep it from degrading we expanded the vocabulary and then performed incremental pre-training on high-quality Chinese and English corpora, with a Chinese-to-English ratio of 3:1.
- General vs. financial data: To improve the model's financial capabilities, the ratio of general to financial corpora during pre-training was 9:1, and the share of financial data was gradually increased as training progressed.
(2) **Model training**
- Training efficiency: We adopted a series of acceleration and optimization strategies, including multiple optimizations of the underlying data loading and distributed training framework, replacing the self-attention module with FlashAttention-2, and replacing the original Llama Python implementations with fused C++/CUDA operators.
- Context length: Building on the optimizations above, and considering that financial scenarios often involve long contexts, we were able to extend Llama2's original 4k context length to 8k and 16k during pre-training.
On a GPU cluster of 100 machines, each with 8 A800 (80G) GPUs, the training statistics were as follows:
| Model | Context Length | Throughput | GPU Utilization |
| ------------ | ---------- | ---------------- | -------- |
| XuanYuan-70B | 8192 | 340 tokens/s/gpu | 190 TFLOPS |
Notes: (1) Gradient accumulation was not enabled during training. (2) The original Llama2-70B achieves a throughput of 323 tokens/s/gpu at a 4k context length, which indicates that our training efficiency is at a leading level.
## Chat Model Instruction Fine-tuning
Based on the XuanYuan-70B base model above, we performed detailed instruction fine-tuning to give the model the ability to converse and follow human instructions.
We adopted a two-stage instruction fine-tuning process:
- Stage 1: Train the base model with a large amount of open-source instruction data. We collected roughly 10M open-source multilingual instruction samples and applied cleaning and deep filtering. The goal of this stage is to cover a wide diversity of instructions and improve the model's instruction-following ability.
- Stage 2: Continue instruction fine-tuning with our own high-quality instruction data. In this stage we carefully built about 200k general + financial instruction samples, most of which were verified and rewritten to ensure quality. This stage tailors the final training to different needs and emphases.
Our in-house instruction data is designed so that the model retains general conversational ability while placing greater emphasis on financial question answering. Specifically, the general instruction data falls into several major categories: general knowledge, coding, logical reasoning, mathematical computation, creative generation, safety, summarization, translation, and so on. Each major category is further divided into multiple subcategories to ensure the diversity and richness of the instruction data.
For the financial instruction data, we made an even more detailed subcategory division to cover all areas of finance and economics. During training, the ratio of general to financial instruction data was 4:1.
During training we kept the 8k context length and did not use extrapolation to extend the context; we will continue to increase the context length in future pre-training. For the question-answer pairs in the training data, the loss is computed only on the answer part.
# Quick Start
The base model, the Chat model, and the 8-bit and 4-bit quantized Chat models have all been released on Hugging Face. Below we describe how to run inference with the base and Chat models.
## Dependencies
```
torch >= 2.0
transformers >= 4.33.1
accelerate
sentencepiece
bitsandbytes (required for 8-bit quantization)
optimum (required for 4-bit quantization)
auto-gptq (required for 4-bit quantization)
vllm (required for inference acceleration)
```
Resource requirements:
- For the base model and the Chat model, deployment requires at least two 80G GPUs to load the model.
- For the 8-bit quantized version, inference deployment requires at least one 80G GPU to load the model.
- For the 4-bit quantized version, inference deployment requires at least one 40G GPU to load the model.
## Using the Base Model
Since the XuanYuan-70B series models are all incrementally pre-trained from Llama2-70B, the base model is used in the same way as the Llama2 base model.
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
model_name_or_path = "Duxiaoman-DI/XuanYuan-70B"
tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True)
model = LlamaForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.bfloat16,device_map="auto")
model.eval()
inputs = tokenizer("问题:李时珍是哪一个朝代的人?回答:", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
outputs = tokenizer.decode(outputs.cpu()[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(outputs)
```
## Using the Chat Model
When constructing prompts for instruction fine-tuning, we followed the conversation format of [FastChat](https://github.com/lm-sys/FastChat). A simple code example is shown below:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
model_name_or_path = "Duxiaoman-DI/XuanYuan-70B-Chat"
tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True)
model = LlamaForCausalLM.from_pretrained(model_name_or_path, device_map="auto")
model.eval()
system_message = "以下是用户和人工智能助手之间的对话。用户以Human开头,人工智能助手以Assistant开头,会对人类提出的问题给出有帮助、高质量、详细和礼貌的回答,并且总是拒绝参与 与不道德、不安全、有争议、政治敏感等相关的话题、问题和指示。\n"
seps = [" ", "</s>"]
roles = ["Human", "Assistant"]
content = "介绍下你自己"
prompt = system_message + seps[0] + roles[0] + ": " + content + seps[0] + roles[1] + ":"
print(f"输入: {content}")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
outputs = tokenizer.decode(outputs.cpu()[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(f"输出: {outputs}")
```
- The example also works with the 8-bit and 4-bit quantized models.
- The example is only the simplest deployment code and does not cover multi-turn conversation, inference acceleration, etc.; for a complete demo, see cli_demo.py.
## CLI Tool
Our GitHub repository provides a command-line demo that supports multi-turn conversation and vLLM-based inference acceleration.
> vLLM does not currently support the quantized models.
```
python3 cli_vllm_demo.py --checkpoint_path <XuanYuan-70B-Chat Path>
```
For example:
```
输入: 你好
输出: 你好,很高兴能为你提供帮助。
输入: 介绍下你自己
输出: 我是轩辕大模型,一个由度小满数据智能应用部AI Lab 开发的人工智能助手,我可以回答各种问题,提供实用的建议和帮助,帮助用户完成各种任务。
输入: 有2块五仁月饼,3块莲蓉月饼,2块豆沙月饼,这些月饼的大小形状质量完全相同。从这7块月饼中,任意取出3块,那么三种月饼都取到 的可能性是几分之几?
输出: 这是一个组合数学问题,我们可以通过计算组合数来解答。
三种月饼都取到,即取到五仁、莲蓉和豆沙各一块。
五仁月饼的选取方法有2种,莲蓉月饼的选取方法有3种,豆沙月饼的选取方法有2种,所以总的取出一种五仁、一种莲蓉、一种豆沙的方法有2*3*2=12种。
从7块月饼中任意取出3块月饼的总的组合数为C(7,3)=35种。
所以,从这7块月饼中,任意取出3块,三种月饼都取到 的可能性为12/35。
```
## Quantized Deployment
To reduce the cost of running XuanYuan locally and lower the GPU memory requirements, we provide pre-quantized 8-bit and 4-bit versions of the XuanYuan-70B-Chat model.
**8-bit offline quantized model**
For 8-bit quantization we use the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library, which is widely used in the community. It implements the LLM.int8() quantization algorithm along with a set of quantization tools, and it is integrated into the transformers library, which makes it easy to use. In our tests, 8-bit quantization is nearly lossless.
**4-bit offline quantized model**
For 4-bit quantization we use the [auto-gptq](https://github.com/PanQiWei/AutoGPTQ) tool. The GPTQ algorithm it implements is currently the most popular 4-bit quantization method, and it is integrated into the transformers and optimum libraries, which makes it easy to use.
The table below shows the GPU memory required by each model and its results on three benchmarks, C-Eval, CMMLU, and MMLU:
| Model | GPU Memory | C-Eval | CMMLU | MMLU |
| ---------------------- | ---- | ----- | ----- | ---- |
| XuanYuan-70B-Chat | 129G | 62.15 | 60.41 | 65.3 |
| XuanYuan-70B-Chat-8bit | 65G | 62.25 | 59.99 | 65.0 |
| XuanYuan-70B-Chat-4bit | 35G | 60.94 | 58.76 | 63.0 |
From these results we can see:
- Compared with the original float16 model, the 8-bit and 4-bit quantized models reduce memory usage to 1/2 and 1/4 respectively, which significantly lowers the hardware requirements.
- The 8-bit quantized model is nearly lossless compared with the original float16 model, while the 4-bit quantized model drops by roughly 2 points.
- In addition, we also conducted human evaluation of conversations with the quantized Chat models; the conclusions are consistent with the benchmark results.
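As a rough, non-official illustration of the bitsandbytes route described above, the sketch below loads the Chat model with on-the-fly 8-bit quantization through `transformers`; the exact loading arguments are assumptions for illustration rather than the project's deployment code, and the pre-quantized XuanYuan-70B-Chat-8bit checkpoint is an alternative starting point.

```python
from transformers import BitsAndBytesConfig, LlamaForCausalLM, LlamaTokenizer

# Hypothetical on-the-fly 8-bit loading of the fp16 Chat model with bitsandbytes.
model_name_or_path = "Duxiaoman-DI/XuanYuan-70B-Chat"
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True)
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path,
    quantization_config=quant_config,  # LLM.int8() weights via bitsandbytes
    device_map="auto",
)
model.eval()
```

Prompt construction then follows the same FastChat-style format shown in the Chat model example above.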
To use the quantized models, refer to the example code in the Chat model usage section above. |
pavfi-at-m/gpt2GPTQ | pavfi-at-m | "2023-10-27T11:17:02Z" | 1,056 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"general",
"history",
"business",
"en",
"dataset:wikitext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T14:37:30Z" | ---
license: mit
datasets:
- wikitext
language:
- en
library_name: transformers
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- general
- history
- business
---
Quantized GPT2 model.
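The card ships no usage code, so the following is only an assumption-laden loading sketch: it presumes the repository carries a GPTQ quantization config that recent `transformers` (with `optimum` and `auto-gptq` installed) can read, and the prompt is an arbitrary example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical loading sketch for this quantized GPT-2 checkpoint.
model_id = "pavfi-at-m/gpt2GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The history of commerce begins with", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```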
For background: Generative Pre-trained Transformer 2 (GPT-2) is a large language model by OpenAI and the second in their foundational series of GPT models. GPT-2 was pre-trained on WebText, a dataset of 8 million web pages. |
Kooten/DaringMaid-13B | Kooten | "2023-12-15T18:16:48Z" | 1,056 | 12 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-15T11:51:10Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# DaringMaid-13B
My goal was to make a Noromaid that's smarter and better at following instructions.
After trying a bunch of different recipes, I think this one turned out pretty good.
- I used [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor) as a base, as it is supposed to "improve overall knowledge, precise communication, conceptual understanding, and technical skill" over the base llama2.
- [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) of course.
- [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B) has been recommended again recently and it's still really good, so into the mixer it goes.
- I liked [tavtav/Rose](https://huggingface.co/tavtav/Rose-20B), so I threw in a bit of [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2).
- There was recently a model that tried to pass itself off as [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b); I made a merge with that model before it was revealed to be MythoMax, and it turned out pretty good, so I used it.
The .yml config files for mergekit with the exact merges can be found in the ["Recipe"](https://huggingface.co/Kooten/DaringMaid-13B/tree/main/Recipe) folder in the [fp16 repo](https://huggingface.co/Kooten/DaringMaid-13B)
# Quants
EXL2: [8bpw](https://huggingface.co/Kooten/DaringMaid-13B-8bpw-exl2), [5bpw](https://huggingface.co/Kooten/DaringMaid-13B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/DaringMaid-13B-4bpw-exl2)
[GGUF](https://huggingface.co/Kooten/DaringMaid-13B-GGUF):
[Q3_K_M](https://huggingface.co/Kooten/DaringMaid-13B-GGUF/blob/main/DaringMaid-13B-Q3_K_M.gguf) - [Q4_K_M](https://huggingface.co/Kooten/DaringMaid-13B-GGUF/blob/main/DaringMaid-13B-Q4_K_M.gguf) - [Q5_K_M](https://huggingface.co/Kooten/DaringMaid-13B-GGUF/blob/main/DaringMaid-13B-Q5_K_M.gguf)
## Prompt template:
I have been using Undi/Ikaris SillyTavern presets for Noromaid: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request. Do not include descriptions of non-visual qualities such as personality, movements, scents, mental traits, or anything which could not be seen in a still photograph. Do not write in full sentences. Prefix your description with the phrase 'full body portrait,'
### Instruction:
{prompt}
### Response:
```
### Contact
Kooten on discord.
|
Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors | Inforup982 | "2024-01-16T03:42:09Z" | 1,056 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T03:12:48Z" | ---
license: apache-2.0
---
**Harsha-Hermes-2.5-Mistral-7B**
Harsha-Hermes-2.5-Mistral-7B is a DPO fine-tune of teknium/OpenHermes-2.5-Mistral-7B using the Intel/orca_dpo_pairs preference dataset and DPO notebook from Maxime Labonne.
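The card does not include usage code, so here is a non-authoritative sketch of chatting with the model via `transformers`; it assumes the tokenizer inherits a ChatML-style chat template from its OpenHermes base, and the generation settings and prompt are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inforup982/Harsha-Hermes-2.5-Mistral-7B_safetensors"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what DPO fine-tuning does in one paragraph."},
]
# Assumes the tokenizer ships a chat template (ChatML in the OpenHermes lineage).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```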
|
mradermacher/MultiVerse_70B-GGUF | mradermacher | "2024-05-06T05:22:12Z" | 1,056 | 4 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MTSAIR/MultiVerse_70B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-04-02T19:02:24Z" | ---
base_model: MTSAIR/MultiVerse_70B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MTSAIR/MultiVerse_70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MultiVerse_70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
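For readers who prefer a Python-only route, here is a hedged sketch (not part of the original card) that joins a split quant from the table below and loads it with `llama-cpp-python`; the file names come from the table, while the context size, GPU-layer setting, and prompt are assumptions.

```python
import shutil

# Join a multi-part GGUF file (names follow the Q5_K_S entry in the table below).
parts = [
    "MultiVerse_70B.Q5_K_S.gguf.part1of2",
    "MultiVerse_70B.Q5_K_S.gguf.part2of2",
]
with open("MultiVerse_70B.Q5_K_S.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)

# Load the merged file with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="MultiVerse_70B.Q5_K_S.gguf", n_ctx=4096, n_gpu_layers=-1)
print(llm("Question: What is quantization? Answer:", max_tokens=128)["choices"][0]["text"])
```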
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q2_K.gguf) | Q2_K | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ3_XS.gguf) | IQ3_XS | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ3_S.gguf) | IQ3_S | 33.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q3_K_S.gguf) | Q3_K_S | 33.1 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ3_M.gguf) | IQ3_M | 34.8 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q3_K_M.gguf) | Q3_K_M | 36.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q3_K_L.gguf) | Q3_K_L | 40.1 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.IQ4_XS.gguf) | IQ4_XS | 40.7 | |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q4_K_S.gguf) | Q4_K_S | 42.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q4_K_M.gguf) | Q4_K_M | 45.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q5_K_M.gguf.part2of2) | Q5_K_M | 52.9 | |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q6_K.gguf.part2of2) | Q6_K | 60.9 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MultiVerse_70B-GGUF/resolve/main/MultiVerse_70B.Q8_0.gguf.part2of2) | Q8_0 | 78.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ABX-AI/WizardLM-2-7B-GGUF-IQ-Imatrix | ABX-AI | "2024-04-15T17:54:45Z" | 1,056 | 22 | null | [
"gguf",
"GGUF",
"Imatrix",
"Quantized",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T16:52:14Z" | ---
license: apache-2.0
tags:
- GGUF
- Imatrix
- Quantized
---
<p style="font-size:20px;" align="center">
GGUF / IQ / Imatrix for microsoft/WizardLM-2-7B
</p>
<p style="font-size:20px;" align="center">
Original Description Below:
</p>
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
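To make the format above concrete, here is a small, non-official Python helper that assembles a multi-turn prompt in this Vicuna style; the conversation turns are placeholders.

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(history, user_message):
    """history: list of (user, assistant) pairs; follows the Vicuna-style template above."""
    prompt = SYSTEM + " "
    for user_turn, assistant_turn in history:
        prompt += f"USER: {user_turn} ASSISTANT: {assistant_turn}</s>"
    prompt += f"USER: {user_message} ASSISTANT:"
    return prompt

# Example with placeholder turns:
print(build_prompt([("Hi", "Hello.")], "Who are you?"))
```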
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github. |
zementalist/llama-3-8B-chat-psychotherapist | zementalist | "2024-04-29T17:57:09Z" | 1,056 | 11 | peft | [
"peft",
"safetensors",
"llama",
"SFT",
"PEFT",
"Mental Health",
"Psychotherapy",
"Fine-tuning",
"Text Generation",
"Chatbot",
"text-generation",
"conversational",
"dataset:mpingale/mental-health-chat-dataset",
"dataset:Amod/mental_health_counseling_conversations",
"dataset:heliosbrahma/mental_health_chatbot_dataset",
"license:mit",
"4-bit",
"region:us"
] | text-generation | "2024-04-28T19:32:08Z" | ---
library_name: peft
tags:
- SFT
- PEFT
- Mental Health
- Psychotherapy
- Fine-tuning
- Text Generation
- Chatbot
license: mit
widget:
- example_title: Depersonalization
messages:
- role: user
content: >-
I feel like I don't exist and my body is not my own, like if I'm sombody
else observin me, what could be this disorder?
- example_title: Confused
messages:
- role: user
content: >-
I'm always confused when making decisions, I cannot choose one option, I
must have only one option so I can make a decision, what are the causes of
being indecisive?
inference:
parameters:
max_new_tokens: 256
stop:
- <|end_of_text|>
- <|eot_id|>
datasets:
- mpingale/mental-health-chat-dataset
- Amod/mental_health_counseling_conversations
- heliosbrahma/mental_health_chatbot_dataset
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A fine-tuned Llama-3-8B-instruct trained to provide initial support and guidance in mental health by leveraging the power of large language models (LLMs).
It's a fine-tuned version of the LLaMA 3 model specifically trained on mental health counseling conversations.
<img src="https://cdn-uploads.huggingface.co/production/uploads/65f2cc4457fbd3c9fb65b34c/ujViqdS9zgW-IrEFbAKLw.jpeg" width="480" style="width:400px;">
## Model Details
Llama-3-8B-chat-psychotherapist
- **Provides active listening:** The model is trained to follow conversation flows and respond attentively to user expressions.
- **Offers empathetic support:** The model's responses are designed to be supportive and understanding, fostering a safe space for users to share their concerns.
- **Guides users towards self-reflection:** The model can ask open-ended questions and prompt users to explore their thoughts and feelings.
- **Offers resources and information:** Depending on the conversation, the model can provide users with relevant mental health resources and information.
Important to Note:
**This model is intended for initial support and guidance, not a replacement for professional mental health care.
It's crucial to emphasize that users should seek professional help if needed.
The model is still under development, and its responses may require human oversight for accuracy and safety.**
Further Considerations:
- You can specify the type of mental health conversations the model is trained on (e.g., anxiety, depression, general well-being).
- Briefly mention the dataset used for fine-tuning to showcase the model's training grounds.
- If applicable, mention any limitations of the model, such as inability to diagnose or provide specific treatment plans.
## How to Get Started with the Model
Use the code below to get started with the model.
### Read model
```
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM
model_id = "zementalist/llama-3-8B-chat-psychotherapist"
config = PeftConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, model_id)
```
### Inference
```
question = "I feel like I don't exist and my body is not my own, I'm sombody else observing me, what causes these ideas?"
messages = [
{"role": "system", "content": "Answer the following inquiry:"},
{"role": "user", "content": question}
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.01
)
response = outputs[0][input_ids.shape[-1]:]
output = tokenizer.decode(response, skip_special_tokens=True)
print(output)
```
## Training Details
### Training Data
1. [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations)
2. [mpingale/mental-health-chat-dataset](https://huggingface.co/datasets/mpingale/mental-health-chat-dataset)
3. [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset)
#### Training Hyperparameters
* epochs: 2
* learning rate: 6e-5
* learning rate scheduler type: constant
* Warmup ratio: 0.03
* optimizer: paged_adamw_32bit
* weight_decay: 0.001
* fp16: True
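As a rough, unofficial illustration of how the hyperparameters above could map onto a Hugging Face `TrainingArguments` object for PEFT-style SFT (the output path, batch size, and logging interval are assumptions not stated in the card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama3-psychotherapist-sft",  # assumed path
    num_train_epochs=2,
    learning_rate=6e-5,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    optim="paged_adamw_32bit",
    weight_decay=0.001,
    fp16=True,
    per_device_train_batch_size=4,  # not reported in the card; placeholder
    logging_steps=100,
)
```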
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
| **Step** | **Training Loss** |
|---------|-------------------|
| 100 | 2.637700 |
| 200 | 2.350900 |
| 300 | 2.234500 |
| 400 | 2.230800 |
| 500 | 2.194900 |
| 600 | 2.041100 |
| 700 | 2.033800 |
| 800 | 1.988600 |
| 900 | 1.898900 |
| 1000 | 1.823600 |
| 1100 | 1.943700 |
| 1200 | 1.789900 |
| 1300 | 1.840700 |
| 1400 | 1.760900 |
| 1500 | 1.718400 |
| 1600 | 1.725600 |
| 1700 | 1.717600 |
| 1800 | 1.628200 |
| 1900 | 1.693300 |
| 2000 | 1.535600 |
| 2100 | 1.630100 |
| 2200 | 1.592300 |
| 2300 | 1.628400 |
| 2400 | 1.462500 |
| 2500 | 1.504500 |
| 2600 | 1.517500 |
| 2700 | 1.375100 |
| 2800 | 1.452800 |
| 2900 | 1.527200 |
| 3000 | 1.418600 |
| 3100 | 1.437700 |
| 3200 | 1.247900 |
| 3300 | 1.134900 |
| 3400 | 1.283800 |
| 3500 | 1.378600 |
| 3600 | 1.114600 |
| 3700 | 1.300900 |
| 3800 | 1.225700 |
| 3900 | 1.190000 |
| 4000 | 1.260200 |
| 4100 | 1.053300 |
| 4200 | 1.173000 |
| 4300 | 1.275900 |
| 4400 | 1.079800 |
| 4500 | 1.121100 |
| 4600 | 1.043100 |
| 4700 | 1.121000 |
| 4800 | 1.087200 |
| 4900 | 1.102400 |
| 5000 | 1.126100 |
| 5100 | 1.182200 |
| 5200 | 1.156500 |
| 5300 | 0.979700 |
| 5400 | 1.145300 |
| 5500 | 1.018100 |
| 5600 | 1.070500 |
| 5700 | 0.954900 |
| 5800 | 1.074200 |
| 5900 | 1.022800 |
| 6000 | 1.037700 |
| 6100 | 0.913400 |
| 6200 | 0.929500 |
### Inference example
1. Question: I feel like I don't exist and my body is not my own,
as if I'm somebody else observing me. What could this disorder be?
* * Answer: You're describing a phenomenon called depersonalization. It's a common symptom of anxiety disorders, post-traumatic stress disorder (PTSD), and other mental health conditions. <br> Depersonalization can make you feel like you're observing yourself from the outside, like you're watching a movie or reading a book about yourself. You might also feel like you're not in control of your body or that you're just a puppet. <br> This feeling can be very distressing and can make you wonder if you're even alive. It's a very disorienting and unsettling experience. <br> If you're experiencing depersonalization, it's important to seek help from a mental health professional. They can help you identify the underlying causes of your feelings and develop coping strategies to manage them. <br>
2. Question: I'm always confused when making decisions, I cannot choose one option,
I must have only one option so I can make a decision, what are the causes of being
indecisive?
* * Answer: Indecisiveness is a common phenomenon that many people experience. It's not a mental health condition, but it can be frustrating and time-consuming. Here are some possible causes of indecisiveness: <ol> <li>Fear of making the wrong decision: Sometimes, people are afraid of making the wrong choice, which can lead to indecisiveness. This fear can be due to various factors such as fear of failure, fear of rejection, or fear of the unknown.</li> <li>Lack of clear goals: When people don't have clear goals, they can be indecisive. This is because they don't know what they want, which can lead to confusion and indecisiveness.</li> <li>Overthinking: Overthinking can be a major cause of indecisiveness. When people overthink, they can get caught up in their thoughts and be unable to make a decision.</li> <li>Lack of self-confidence: People who lack self-confidence can be indecisive. This is because they don't believe in themselves and are afraid of making the wrong decision.</li> <li>Avoidance: Avoiding a decision can be a cause of indecisiveness. When people avoid making a decision, they can be indecisive because they don't know</li> </ol>
- **Developed by:** [Zeyad Khalid] |
Monor/Llama3-8B-Chinese-Chat-gguf | Monor | "2024-05-03T06:00:36Z" | 1,056 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | "2024-05-01T13:32:23Z" | ---
license: apache-2.0
---
## Introduce
Quantized versions of [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) in f16, q2, q3, q4, q5, q6, and q8 formats, produced with llama.cpp.
|
T3Zhang/mymodel | T3Zhang | "2024-06-04T00:20:17Z" | 1,056 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-18T15:21:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DrishtiSharma/StableDiffusion-Prompt-Generator-GPT-Neo-125M | DrishtiSharma | "2023-03-26T13:01:45Z" | 1,055 | 17 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-10-08T10:26:01Z" | Entry not found |
ALM/wav2vec2-base-audioset | ALM | "2024-06-22T09:21:37Z" | 1,055 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"music",
"audio",
"audio-classification",
"arxiv:2405.00934",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-09-05T15:35:51Z" | ---
license: cc-by-nc-sa-4.0
pipeline_tag: audio-classification
tags:
- music
- audio
---
# Model Card: Pre-trained Audio Representation Models on AudioSet
## Overview
This model card presents information about pre-trained audio representation models released by ALM. These models are pre-trained on the full AudioSet dataset and are intended for general-purpose Audio Representation Learning (ARL) tasks.
## Models
### 1. [ALM/hubert-base-audioset](https://huggingface.co/ALM/hubert-base-audioset)
- **Architecture**: HuBERT (Hubert-Base) transformer-based model
- **Description**: This model is based on the HuBERT architecture, pre-trained on the full AudioSet dataset.
### 2. [ALM/hubert-large-audioset](https://huggingface.co/ALM/hubert-large-audioset)
- **Architecture**: HuBERT (Hubert-Large) transformer-based model
- **Description**: Similar to the hubert-base-audioset model, this variant is larger in size, providing increased capacity for capturing audio representations from the full AudioSet dataset.
### 3. [ALM/wav2vec2-base-audioset](https://huggingface.co/ALM/wav2vec2-base-audioset)
- **Architecture**: Wav2Vec 2.0 (Wav2Vec2-Base) transformer-based model
- **Description**: This model is based on the Wav2Vec 2.0 architecture, trained on the full AudioSet dataset using SSL with CPC. It offers a different approach to audio representation learning compared to the HuBERT models.
### 4. [ALM/wav2vec2-large-audioset](https://huggingface.co/ALM/wav2vec2-large-audioset)
- **Architecture**: Wav2Vec 2.0 (Wav2Vec2-Large) transformer-based model
- **Description**: Similar to the wav2vec2-base-audioset model, this variant is larger in size, providing enhanced capacity for learning audio representations from the full AudioSet dataset.
## Intended Use
These pre-trained models are intended for a wide range of ARL tasks, including but not limited to speech recognition, music classification, and acoustic event detection. They serve as powerful tools for feature extraction and can be fine-tuned on task-specific datasets for downstream applications.
It's important to note that while these models offer versatility across various audio domains, their performance in speech-related tasks may be relatively lower compared to specialized models such as the original Wav2Vec and HuBERT models.
This is due to the diverse nature of the AudioSet dataset used for pre-training, which includes a wide range of audio sources beyond speech.
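Since the card does not include usage code, the following is a tentative feature-extraction sketch with 🤗 Transformers; it assumes the checkpoint loads with the standard Wav2Vec2 classes, that a preprocessor config is shipped, and that a 16 kHz mono waveform is available (a silent placeholder is used here).

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "ALM/wav2vec2-base-audioset"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)  # assumes a preprocessor config is available
model = AutoModel.from_pretrained(model_id)
model.eval()

# Placeholder: one second of silence at 16 kHz; replace with a real waveform.
waveform = torch.zeros(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_dim)

# Mean-pool over time to obtain a single embedding per clip.
clip_embedding = hidden_states.mean(dim=1)
print(clip_embedding.shape)
```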
## Limitations and Considerations
- The models are pre-trained on the full AudioSet dataset, which may not cover all possible audio domains comprehensively.
- Fine-tuning on domain-specific data may be necessary to achieve optimal performance for certain tasks.
- Computational resources may be required for deploying and fine-tuning these models, especially the larger variants.
## Citation
If you use these pre-trained models in your work, please cite the following
```bib
@article{ARCH,
title={Benchmarking Representations for Speech, Music, and Acoustic Events},
author={La Quatra, Moreno and Koudounas, Alkis and Vaiani, Lorenzo and Baralis, Elena and Garza, Paolo and Cagliero, Luca and Siniscalchi, Sabato Marco},
year={2024},
booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
}
```
[arXiv version: arxiv.org/abs/2405.00934](https://arxiv.org/abs/2405.00934) |
budecosystem/code-millenials-13b | budecosystem | "2024-01-10T08:56:06Z" | 1,055 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-02T09:16:08Z" | ---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.7621
verified: false
---
# Bud Code Millenials 13B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing open models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval value of 88.2 and a context size of 168K. We will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected].
### News 🔥🔥🔥
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the Millenials models, the eval script in the GitHub repo was used to produce the above results.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### 🚀 Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-13b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction} ### Response:"""
instruction = "<Your code instruction here>"  # replace with your actual coding instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 8x A100 80GB GPUs for approximately 15 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 2 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 34503 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 8 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
|
xverse/XVERSE-13B-256K | xverse | "2024-06-28T08:19:04Z" | 1,055 | 29 | transformers | [
"transformers",
"pytorch",
"xverse",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-11T06:27:21Z" | ---
license: apache-2.0
inference: false
---
# XVERSE-13B-256K
## 更新信息
**[2024/06/28]** 更新tokenizers。
**[2024/01/16]** 发布长序列对话模型 **XVERSE-13B-256K**,该版本模型最大支持 256K 的上下文窗口长度,约 25w 字的输入内容,可以协助进行文献总结、报告分析等任务。
**[2023/11/06]** 发布新版本的 **XVERSE-13B-2** 底座模型和 **XVERSE-13B-Chat-2** 对话模型,相较于原始版本,新版本的模型训练更加充分(从 1.4T 增加到 3.2T),各方面的能力均得到大幅提升,同时新增工具调用能力。
**[2023/09/26]** 发布 7B 尺寸的 [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) 底座模型和 [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) 对话模型,支持在单张消费级显卡部署运行,并保持高性能、全开源、免费可商用。
**[2023/08/22]** 发布经过指令精调的 XVERSE-13B-Chat 对话模型。
**[2023/08/07]** 发布 13B 尺寸的 XVERSE-13B 底座模型。
## Update Information
**[2024/06/28]** Updated tokenizers.
**[2024/01/16]** Released the long-sequence model **XVERSE-13B-256K**. This model version supports a maximum window length of 256K, accommodating approximately 250,000 words for tasks such as literature summarization and report analysis.
**[2023/11/06]** The new versions of the **XVERSE-13B-2** base model and the **XVERSE-13B-Chat-2** model have been released. Compared to the original versions, the new models have undergone more extensive training (increasing from 1.4T to 3.2T), resulting in significant improvements in all capabilities, along with the addition of Function Call abilities.
**[2023/09/26]** Released the [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) instruct-finetuned model with 7B size, which support deployment and operation on a single consumer-grade graphics card while maintaining high performance, full open source, and free for commercial use.
**[2023/08/22]** Released the aligned instruct-finetuned model XVERSE-13B-Chat.
**[2023/08/07]** Released the XVERSE-13B base model.
## Tokenizer版本说明
当使用的tokenizer版本低于0.19,可直接使用仓库中的tokenizer.json和tokenizer_config.json。对于0.19及以上版本,请使用tokenizer.json.update和tokenizer_config.json.update,需要将这两个文件中的所有内容复制并粘贴覆盖至现有的tokenizer.json和tokenizer_config.json文件中。
For tokenizer versions below 0.19, you can directly use the tokenizer.json and tokenizer_config.json files from the repository. For versions 0.19 and above, please utilize the tokenizer.json.update and tokenizer_config.json.update files. You need to copy all the contents from these two files and paste them over the existing tokenizer.json and tokenizer_config.json files.
## 模型介绍
**XVERSE-13B-256K**是[**XVERSE-13B-2**](https://huggingface.co/xverse/XVERSE-13B)模型经过ABF+继续预训练、NTK+SFT 微调后的版本。
**XVERSE-13B-256K** 是由深圳元象科技自主研发的支持多语言的大语言模型(Large Language Model),主要应用的技术如下:
- **ABF**: ABF 的全称是 Adjusted Base Frequency,表示将位置编码 RoPE(Rotary Position Embedding)的频率从 10000 修改成 500000 。别小看这个数字的更改,它可以大幅减少前面序列 attention 的衰减速度,让后面的序列更好地获取所有序列的信息。
- **继续预训练**:在 XVERSE-13B-2 的基础上,使用 20% 的预训练数据进行 32K 的长序列继续预训练。通过少量长序列数据的继续预训练而不是从头开始的长序列预训练,可以大幅减少预训练的训练量。
- **NTK**: NTK 的全称是 Neural Tangent Kernel,翻译为神经正切核,是一种用于理解和分析深度神经网络行为的工具。使用了 NTK 的 RoPE 可以对 RoPE 的频率进行动态的插值。在保持分辨率的情况下(高频),进行频域空间的缩放(低频),从而实现位置空间的插值。
- **SFT数据**:自主构建包含单文本问答,多文本问答,摘要,代码补全等各类长序列数据,序列长度从 32K 到 256K 不等。
## Model Introduction
**XVERSE-13B-256K** is the long-sequence version of model [**XVERSE-13B-2**](https://huggingface.co/xverse/XVERSE-13B),
updated by **Continual-Pre-Training** based on **ABF** and **supervised fine-tuning** based on **NTK**.
**XVERSE-13B-256K** is a multilingual large language model, independently developed by Shenzhen Yuanxiang Technology. Below are the main practical techniques:
- **ABF**: Adjusted Base Frequency means that changing the frequency of Rotary Position Embedding(RoPE) from 10,000 to 500,000.
- **Continual-Pre-Training**: Based on XVERSE-13B-2, 32K long sequence continuation pre-training is conducted using 20% of the pre-training data. This approach significantly reduces the training volume for pre-training by utilizing a small amount of long sequence data for continuation pre-training instead of starting from scratch with long sequence pre-training.
- **NTK**: Neural Tangent Kernel is a tool used for understanding and analyzing the behavior of deep neural networks. RoPE, employing NTK, enables dynamic interpolation of its frequencies. This involves scaling in the frequency domain while maintaining resolution, thereby achieving spatial interpolation in the positional domain.
- **Data for SFT**: We autonomously construct a diverse range of long sequence data, encompassing single-document question-answering (QA), multi-document QA, summarization, code completion, and other types. The sequence lengths vary from 32K to 256K.
## 评测结果
为了验证长序列的效果,这里我们使用了 LongBench 数据集。[ LongBench ](https://github.com/THUDM/LongBench)是第一个多任务、中英双语、针对大语言模型长文本理解能力的评测基准。 LongBench 由六大类、二十一个不同的任务组成,覆盖了单文档问答、多文档问答、摘要、Few shot任务、合成任务和代码补全等关键的长文本应用场景。 LongBench 包含 14 个英文任务、 5 个中文任务和 2 个代码任务,多数任务的平均长度在 5k-15k 之间,共包含 4750 条测试数据。评估结果如下:
| 能力维度 | 数据集 | XVERSE-13B-256K | GPT-3.5-Turbo-16K | Yi-6B-200K | LongChat-7B-16K | Llama2-7B-Chat-4K |
| :--------: | :-------------------: | :----: | :----------: | :--------: | :-----------: | :--------: |
| 多文档问答 | HotpotQA | 58.3 | 51.6 | 48.3 | 22.4 | 24.3 |
| | DuReader | 28.9 | 28.7 | 14.2 | 19.1 | 1.9 |
| 单文档问答 | NarrativeQA | 24.1 | 23.6 | 14.5 | 21.6 | 19.1 |
| | Qasper | 30.2 | 43.3 | 21.6 | 21.6 | 19.6 |
| 摘要 | VCSUM | 11.3 | 16.0 | 8.2 | 14.0 | 0.2 |
| Few shot | TREC | 72.0 | 68.0 | 71.0 | 61.5 | 60.5 |
| | LSHT | 35.0 | 29.2 | 38.0 | 20.8 | 19.8 |
| 合成任务 | PassageRetrieval-en | 63.0 | 71.0 | 6.0 | 24.0 | 9.2 |
| | PassageRetrieval-zh | 44.0 | 77.5 | 7.9 | 4.8 | 0.5 |
| 代码 | RepoBench-P | 55.6 | 53.6 | 61.5 | 54.7 | 42.4 |
对于上述所有比较模型,我们优先汇报其官方公布的结果。在缺少官方结果的情况下,我们采用自行执行的评估流程所获得的数据。
## Model Evaluation
To assess the performance of long sequences, we employed the LongBench dataset. [LongBench](https://github.com/THUDM/LongBench) stands as the inaugural multi-task, bilingual (English-Chinese), evaluation benchmark specifically designed to gauge the long-text comprehension capabilities of large language models. Comprising six major categories and twenty-one distinct tasks, LongBench encompasses critical long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot tasks, synthetic tasks, and code completion. The dataset consists of 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the majority of tasks having an average length ranging from 5,000 to 15,000 tokens, totaling 4,750 test instances. The evaluation results are presented below:
| Capability Dimension | Dataset | XVERSE-13B-256K | GPT-3.5-Turbo-16K | Yi-6B-200K | LongChat-7B-16K | Llama2-7B-Chat-4K |
| :--------: | :-------------------: | :----: | :----------: | :--------: | :-----------: | :--------: |
| multi-document QA | HotpotQA | 58.3 | 51.6 | 48.3 | 22.4 | 24.3 |
| | DuReader | 28.9 | 28.7 | 14.2 | 19.1 | 1.9 |
| single-document QA | NarrativeQA | 24.1 | 23.6 | 14.5 | 21.6 | 19.1 |
| | Qasper | 30.2 | 43.3 | 21.6 | 21.6 | 19.6 |
| summarization | VCSUM | 11.3 | 16.0 | 8.2 | 14.0 | 0.2 |
| Few shot | TREC | 72.0 | 68.0 | 71.0 | 61.5 | 60.5 |
| | LSHT | 35.0 | 29.2 | 38.0 | 20.8 | 19.8 |
| synthetic tasks | PassageRetrieval-en | 63.0 | 71.0 | 6.0 | 24.0 | 9.2 |
| | PassageRetrieval-zh | 44.0 | 77.5 | 7.9 | 4.8 | 0.5 |
| code completion | RepoBench-P | 55.6 | 53.6 | 61.5 | 54.7 | 42.4 |
For all the comparison models mentioned above, we prioritize the disclosure of their officially published results. In the absence of official data, we refer to the results derived from our own evaluation pipeline.
### Loading with Transformers
环境安装:
Environment Setup:
```bash
pip install -r requirements.txt
```
可通过以下代码加载 XVERSE-13B-256K 模型进行对话:
The XVERSE-13B-256K model can be loaded for chat using the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-13B-256K")
model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-13B-256K", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()
inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
inputs = inputs.cuda()
generated_ids = model.generate(inputs, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.1)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```
更多细节,包括对话 demo 、模型微调及量化等,请参考我们的[Github](https://github.com/xverse-ai/XVERSE-13B)。
For more details, including chat demo, model fine-tuning and quantization, please refer to our [Github](https://github.com/xverse-ai/XVERSE-13B).
## 局限性与免责申明
XVERSE-13B-256K 与其他所有 LLM 一样,在某些情况下可能会产生不准确、有偏见或其他令人反感的内容。因此,请谨慎使用模型生成的内容,请勿将生成的有害内容进行传播,在部署任何 XVERSE-13B-256K 的应用之前,开发人员应根据其具体应用对模型进行安全测试和调优。
我们强烈警告不要将 XVERSE-13B-256K 模型用于制造或传播有害信息,或进行任何可能损害公众、国家、社会安全或违反法规的活动。如果使用 XVERSE-13B-256K 模型产生任何问题,无论是数据安全问题、公共舆论风险,还是模型被误解、滥用、传播或不合规使用所引发的任何风险和问题,我们将不承担任何责任。
## Limitations and Disclaimer
Like all other Large Language Models (LLMs), XVERSE-13B-256K may produce inaccurate, biased, or otherwise offensive content under certain circumstances. Therefore, please use the content generated by the model with caution and refrain from disseminating harmful content. Before deploying any application of XVERSE-13B-256K, developers should conduct safety tests and optimization of the model according to its specific application.
We strongly warn against the use of the XVERSE-13B-256K model for producing or spreading harmful information, or conducting any activities that might harm the public, national, or social security, or violate regulations. We assume no responsibility for any problems arising from the use of the XVERSE-13B-256K model, whether it be data security issues, public opinion risks, or any risks and issues caused by misunderstanding, misuse, dissemination, or non-compliance with the model.
## 模型开源协议
使用本仓库的源码需要遵循 [Apache-2.0](https://github.com/xverse-ai/XVERSE-13B/blob/main/LICENSE) 开源协议,使用 XVERSE-13B-256K 的模型权重则需要遵循[模型许可协议](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf)。
XVERSE-13B-256K 模型权重对学术研究**完全开放**,并且支持**免费商用**。如需申请商业许可证,请填写【[申请表](https://chat.xverse.cn/home/business.html)】,如有其他问题或合作,请联系 <[email protected]>。
## Open Source License
The use of the source code in this repository must follow the [Apache-2.0](https://github.com/xverse-ai/XVERSE-13B/blob/main/LICENSE) open-source license, while the use of the model weights of XVERSE-13B-256K needs to adhere to the [Model License Agreement](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf).
The XVERSE-13B-256K model weights are **fully open** to academic research and support **free commercial use**. To apply for a commercial license, please fill in the [application form](https://chat.xverse.cn/home/business.html). For other questions or collaborations, please contact <[email protected]>.
|
Weyaxi/Einstein-v6.1-LLama3-8B-Instruct-Ties | Weyaxi | "2024-04-27T10:53:30Z" | 1,055 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-27T10:49:10Z" | ---
base_model:
- Weyaxi/Einstein-v6.1-Llama3-8B
- meta-llama/Meta-Llama-3-8B-Instruct
- meta-llama/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B
#no parameters necessary for base model
- model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.5
weight: 0.5
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
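To reproduce a merge like this locally, a minimal invocation of the mergekit CLI could look like the sketch below, assuming the configuration above is saved as `config.yaml`; the output path and the `--cuda` flag are illustrative choices:

```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```
|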
CHE-72/Phi-3-medium-128k-instruct-Q6_K-GGUF | CHE-72 | "2024-06-21T20:04:12Z" | 1,055 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"region:us"
] | text-generation | "2024-06-21T20:03:24Z" | ---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# CHE-72/Phi-3-medium-128k-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q6_K-GGUF --hf-file phi-3-medium-128k-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q6_K-GGUF --hf-file phi-3-medium-128k-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q6_K-GGUF --hf-file phi-3-medium-128k-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q6_K-GGUF --hf-file phi-3-medium-128k-instruct-q6_k.gguf -c 2048
```
|
larenspear/Yi-1.5-9B-Chat-Q3_K_S-GGUF | larenspear | "2024-07-01T02:12:38Z" | 1,055 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-9B-Chat",
"license:apache-2.0",
"region:us"
] | null | "2024-07-01T02:12:21Z" | ---
base_model: 01-ai/Yi-1.5-9B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# larenspear/Yi-1.5-9B-Chat-Q3_K_S-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo larenspear/Yi-1.5-9B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-9b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo larenspear/Yi-1.5-9B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-9b-chat-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/Yi-1.5-9B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-9b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/Yi-1.5-9B-Chat-Q3_K_S-GGUF --hf-file yi-1.5-9b-chat-q3_k_s.gguf -c 2048
```
|
stablediffusionapi/anything-midjourney | stablediffusionapi | "2023-08-29T18:12:30Z" | 1,054 | 15 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-19T07:51:11Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# anything midjourney v4.1 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "anything-midjourney"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/anything-midjourney)
Credits: [View credits](https://civitai.com/?query=anything%20midjourney%20v4.1)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "anything-midjourney",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Panchovix/airoboros-65b-gpt4-1.2-4bit-32g-actorder | Panchovix | "2023-06-24T07:34:00Z" | 1,054 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-24T06:51:58Z" | ---
license: cc-by-nc-4.0
---
This is a 4-bit GPTQ version of [airoboros-65b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2)
It was created with GPTQ-for-LLaMA, using group size 32 and act-order true, to keep perplexity as close as possible to the FP16 model.
It may have issues fitting on systems with 2x24 GB VRAM cards when using GPTQ-for-LLaMA or AutoGPTQ at max context. It works fine on a single 48GB VRAM card (RTX A6000).
It works fine with 2x24GB VRAM cards when using exllama/exllama_HF at 2048 context.
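As a rough, untested sketch, loading this checkpoint with AutoGPTQ could look like the code below; parameter choices such as `use_safetensors` (or an explicit `model_basename`) may need adjusting to match the files actually present in this repo, and the prompt format is only an approximation of the airoboros style.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "Panchovix/airoboros-65b-gpt4-1.2-4bit-32g-actorder"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)

# Spread the 4-bit weights across the available GPUs (or a single 48GB card)
model = AutoGPTQForCausalLM.from_quantized(repo, device_map="auto", use_safetensors=True)

prompt = "A chat between a curious user and an assistant. USER: Hello! ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|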
Aryanne/TinyLlama-1.1B-1T-OpenOrca-gguf | Aryanne | "2023-10-10T15:00:01Z" | 1,054 | 5 | null | [
"gguf",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:bigcode/starcoderdata",
"dataset:cerebras/SlimPajama-627B",
"license:apache-2.0",
"region:us"
] | null | "2023-10-09T23:27:31Z" | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
language:
- en
---
Some GGUF v2 quantizations of the model [jeff31415/TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca)
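#### Example usage (llama.cpp):
A quick sketch of running one of these quantizations with llama.cpp; the `--hf-file` name below is a placeholder, so substitute whichever quantization file you pick from this repo. Since the fine-tune used the ChatML format, wrapping your prompt in ChatML tags should give better results.
```bash
llama-cli --hf-repo Aryanne/TinyLlama-1.1B-1T-OpenOrca-gguf \
  --hf-file <chosen-quantization>.gguf \
  -e -p "<|im_start|>user\nWrite a short poem about the sea.<|im_end|>\n<|im_start|>assistant\n" \
  -n 128
```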
#### Base model:
PY007/TinyLlama-1.1B-intermediate-step-480k-1T
#### Dataset:
Fine-tuned on the OpenOrca GPT4 subset for 1 epoch, using the ChatML format
#### Model License:
Apache 2.0, following the TinyLlama base model.
#### Hardware and training details:
Hardware: 1*RTX A5000, ~16 hours to complete 1 epoch. GPU from autodl.com, cost around $3 for this finetuning.
https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details. |
flemmingmiguel/HermesChat-Mistral-7B | flemmingmiguel | "2024-01-11T09:33:05Z" | 1,054 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-1210",
"teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T09:29:17Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-1210
- teknium/OpenHermes-2.5-Mistral-7B
---
# HermesChat-Mistral-7B
HermesChat-Mistral-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "flemmingmiguel/HermesChat-Mistral-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
NYTK/PULI-LlumiX-32K | NYTK | "2024-03-16T12:18:55Z" | 1,054 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"puli",
"custom_code",
"hu",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-12T12:59:30Z" | ---
license: llama2
language:
- hu
- en
tags:
- puli
---
# PULI LlumiX 32K (6.74 billion parameters)
For further details, or to try our instruct model, see [our demo site](https://puli.nytud.hu/puli-llumix-instruct).
- Trained with OpenChatKit [github](https://github.com/togethercomputer/OpenChatKit)
- The [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) model was continually pretrained on a Hungarian dataset
- The model has been extended to a context length of 32K with position interpolation
- Checkpoint: 100 000 steps
## Dataset for continued pretraining
- Hungarian: 7.9 billion words, documents (763K) that exceed 5000 words in length
- English: Long Context QA (2 billion words), BookSum (78 million words)
## Limitations
- max_seq_length = 32 768
- float16
- vocab size: 32 000
## Usage with pipeline
```python
from transformers import pipeline, LlamaForCausalLM, LlamaTokenizer
model = LlamaForCausalLM.from_pretrained("NYTK/PULI-LlumiX-32K")
tokenizer = LlamaTokenizer.from_pretrained("NYTK/PULI-LlumiX-32K")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
``` |
IDKiro/sdxs-512-0.9 | IDKiro | "2024-04-13T16:01:37Z" | 1,054 | 105 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"arxiv:2403.16627",
"license:openrail++",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-25T09:12:50Z" | ---
license: openrail++
tags:
- text-to-image
- stable-diffusion
library_name: diffusers
inference: false
---
# News
**Use the new version for community: [SDXS-512-DreamShaper](https://huggingface.co/IDKiro/sdxs-512-dreamshaper). It has better quality and is faster.**
# SDXS-512-0.9
SDXS is a model that can generate high-resolution images in real-time based on prompt texts, trained using score distillation and feature matching. For more information, please refer to our research paper: [SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions](https://arxiv.org/abs/2403.16627). We open-source the model as part of the research.
SDXS-512-0.9 is an **older version** of SDXS-512. In order to avoid some possible commercial and copyright risks, SDXS-512-1.0 and SDXS-1024-1.0 will not be available in the near term; as an alternative, we will provide new versions with a different teacher DM or offline DM. Watch [our repo](https://github.com/IDKiro/sdxs) for updates.
Model Information:
- Teacher DM: [SD Turbo](https://huggingface.co/stabilityai/sd-turbo)
- Offline DM: [SD v2.1 base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- VAE: [TAESD](https://huggingface.co/madebyollin/taesd)
The main differences between this model and version 1.0 are in three aspects:
1. This version employs TAESD, which may produce low-quality images when weight_type is float16. Our image decoder is not compatible with the current version of diffusers, so it will not be provided now.
2. This version did not perform the LoRA-GAN finetune mentioned in the implementation details section, which may result in slightly inferior image details.
3. This version replaces self-attention with cross-attention in the highest resolution stages, which introduces minimal overhead compared to directly removing them.
## Diffusers Usage

```python
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL
repo = "IDKiro/sdxs-512-0.9"
seed = 42
weight_type = torch.float32 # or float16
# Load model.
pipe = StableDiffusionPipeline.from_pretrained(repo, torch_dtype=weight_type)
# use original VAE
# pipe.vae = AutoencoderKL.from_pretrained("IDKiro/sdxs-512-0.9/vae_large")
pipe.to("cuda")
prompt = "portrait photo of a girl, photograph, highly detailed face, depth of field, moody light, golden hour"
# Ensure using 1 inference step and CFG set to 0.
image = pipe(
prompt,
num_inference_steps=1,
guidance_scale=0,
generator=torch.Generator(device="cuda").manual_seed(seed)
).images[0]
image.save("output.png")
```
## Cite Our Work
```
@article{song2024sdxs,
author = {Song, Yuda and Sun, Zehao and Yin, Xuanwu},
title = {SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions},
journal = {arXiv preprint arXiv:2403.16627},
year = {2024},
}
```
|
BeaverAI/Llama-3SOME-8B-v2b-GGUF | BeaverAI | "2024-06-05T16:54:01Z" | 1,054 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-05T16:45:30Z" | Entry not found |
IlyaGusev/rut5_base_headline_gen_telegram | IlyaGusev | "2021-12-18T19:27:52Z" | 1,053 | 7 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | "2022-03-02T23:29:04Z" | ---
language:
- ru
tags:
- summarization
license: apache-2.0
widget:
- text: "Комиссия Совета Федерации по информационной политике и взаимодействию со СМИ совместно с заинтересованными ведомствами думает над разработкой национального законодательства в области налогообложения глобальных интернет-компаний, таких как Google и Facebook. Об этом сообщил ТАСС председатель комиссии Алексей Пушков. «В настоящее время по линии ОЭСР [Организация экономического сотрудничества и развития] ведется разработка международной конвенции, однако работа над ней еще не завершена. В этих условиях мы исходим из того, что самая разумная позиция - начать разработку национального законодательства, не дожидаясь конвенции», — пояснил сенатор. Пушков отметил, что по такому пути пошли еще несколько стран, в числе которых Франция, Австралия и Турция. По его словам, в России важно задействовать в этой работе Минфин, ФНС, МИД РФ и Роскомнадзор. «Интернет-платформы не фигурируют у нас сейчас как отдельный объект налогообложения. Когда они откроют в России свои представительства в рамках закона о «приземлении», возникнет вопрос: как их официальное присутствие на территории России, которого сейчас нет, будет соотноситься с нашим налоговым режимом. Мы сейчас продумываем, как установить эту взаимосвязь», — сказал Пушков, добавляя, что вопрос внесения изменений в российское законодательство в части налогообложения крупных IT-компаний находится «на первой стадии изучения». Сам сенатор выступает за введение прогрессивной ставки налога в зависимости от прибыли IT-компаний на территории страны. При этом, подчеркнул он, одна из задач национальной системы налогообложения будет заключаться в подсчете налогооблагаемой базы. Сейчас крупные ИТ-компании самостоятельно отчитываются о своей прибыли. Однако России нужна собственная система подсчета их доходов, которая позволит определить их «реальную налогооблагаемую базу», считает Пушков. (https://www.gazeta.ru/tech/news/2021/12/17/n_17024239.shtml)"
example_title: "Новость про налоги в IT"
- text: "Первую многоножку, у которой более тысячи ног, обнаружили в австралийских пещерах биологи, изучавшие там подземные воды. Предыдущей рекордсменкой по количеству ног была 700-ногая многоножка. Новый вид имеет длинное тонкое тело, похожее на нить, и большое количество конечностей, по-видимому, дает преимущества для быстрого перемещения и проникновения в труднодоступные места — ученые полагают, такая многоножка может спокойно перемещаться по трещинам в камнях. Австралия известна своими огромными и жутковатыми животными вроде 25-сантиметровых пауков. Теперь список пугающих членистоногих пополнился самой «многоногой» в мире многоножкой, у которой более тысячи ног. Необычное животное обнаружила группа исследователей из Австралии и США в пещерах на западе страны. Подробнее многоножку ученые описали в статье в журнале Scientific Reports. Исследователи занимались оценкой воздействия подземных вод на окружающую среду в зоне добычи полезных ископаемых на западе страны, когда наткнулись на новый вид многоножек. В отличие от большинства сородичей, живущих на поверхности, эти многоножки обитали в пещерах на глубине до 60 метров. Новый вид исследователи назвали Eumillipes persephone, в честь Персефоны — древнегреческой богини подземного мира. У многоножки оказалось 1306 ног — больше, чем у любого другого известного вида. Предыдущей рекордсменкой была калифорнийская Illacme plenipes, у которой насчитывалось до 750 ног. «Эти животные были настолько уникальны, — говорит биолог Бруно Бузатто. — Как только я понял, какой длины они были... Стало ясно, что это что-то совершенно новое». У Е. persephone нитевидное тело длиной около 9,5 см и шириной всего миллиметр, состоящее из 330 сегментов, короткие ноги и конусообразная голова. Как и другие животные, живущие в постоянной темноте, эти многоножки бледны и слепы. Энтомолог Пол Марек сравнивает ее с белой нитью, выдернутой из рубашки. Чтобы посчитать количество ног, ученым пришлось сначала снять многоножку в высоком разрешении, а затем закрашивать на фото каждый десяток ног другим цветом. (https://www.gazeta.ru/science/2021/12/17_a_14325355.shtml)"
example_title: "Новость про многоножку"
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
example_title: "Википедия"
---
# RuT5TelegramHeadlines
## Model description
Based on the [rut5-base](https://huggingface.co/cointegrated/rut5-base) model.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_headline_gen_telegram"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py) |
Helsinki-NLP/opus-mt-tc-big-fr-en | Helsinki-NLP | "2023-10-10T10:25:45Z" | 1,053 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"en",
"fr",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-04-13T16:02:39Z" | ---
language:
- en
- fr
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-fr-en
results:
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: flores101-devtest
type: flores_101
args: fra eng devtest
metrics:
- name: BLEU
type: bleu
value: 46.0
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 49.7
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: multi30k_test_2017_flickr
type: multi30k-2017_flickr
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 52.0
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: multi30k_test_2017_mscoco
type: multi30k-2017_mscoco
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 50.6
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 44.9
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: news-test2008
type: news-test2008
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 26.5
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newsdiscussdev2015
type: newsdiscussdev2015
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 34.4
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newsdiscusstest2015
type: newsdiscusstest2015
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 40.2
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 59.8
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: tico19-test
type: tico19-test
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 41.3
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newstest2009
type: wmt-2009-news
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 30.4
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newstest2010
type: wmt-2010-news
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 33.4
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newstest2011
type: wmt-2011-news
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 33.8
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newstest2012
type: wmt-2012-news
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 33.6
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newstest2013
type: wmt-2013-news
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 34.8
- task:
name: Translation fra-eng
type: translation
args: fra-eng
dataset:
name: newstest2014
type: wmt-2014-news
args: fra-eng
metrics:
- name: BLEU
type: bleu
value: 39.4
---
# opus-mt-tc-big-fr-en
Neural machine translation model for translating from French (fr) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): fra
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT fra-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"J'ai adoré l'Angleterre.",
"C'était la seule chose à faire."
]
model_name = "pytorch-models/opus-mt-tc-big-fr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# I loved England.
# It was the only thing to do.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-en")
print(pipe("J'ai adoré l'Angleterre."))
# expected output: I loved England.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fra-eng | tatoeba-test-v2021-08-07 | 0.73772 | 59.8 | 12681 | 101754 |
| fra-eng | flores101-devtest | 0.69350 | 46.0 | 1012 | 24721 |
| fra-eng | multi30k_test_2016_flickr | 0.68005 | 49.7 | 1000 | 12955 |
| fra-eng | multi30k_test_2017_flickr | 0.70596 | 52.0 | 1000 | 11374 |
| fra-eng | multi30k_test_2017_mscoco | 0.69356 | 50.6 | 461 | 5231 |
| fra-eng | multi30k_test_2018_flickr | 0.65751 | 44.9 | 1071 | 14689 |
| fra-eng | newsdiscussdev2015 | 0.59008 | 34.4 | 1500 | 27759 |
| fra-eng | newsdiscusstest2015 | 0.62603 | 40.2 | 1500 | 26982 |
| fra-eng | newssyscomb2009 | 0.57488 | 31.1 | 502 | 11818 |
| fra-eng | news-test2008 | 0.54316 | 26.5 | 2051 | 49380 |
| fra-eng | newstest2009 | 0.56959 | 30.4 | 2525 | 65399 |
| fra-eng | newstest2010 | 0.59561 | 33.4 | 2489 | 61711 |
| fra-eng | newstest2011 | 0.60271 | 33.8 | 3003 | 74681 |
| fra-eng | newstest2012 | 0.59507 | 33.6 | 3003 | 72812 |
| fra-eng | newstest2013 | 0.59691 | 34.8 | 3000 | 64505 |
| fra-eng | newstest2014 | 0.64533 | 39.4 | 3003 | 70708 |
| fra-eng | tico19-test | 0.63326 | 41.3 | 2100 | 56323 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:02:28 EEST 2022
* port machine: LM0-400-22516.local
|
line-corporation/japanese-large-lm-1.7b | line-corporation | "2023-08-17T01:06:37Z" | 1,053 | 24 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-21T00:46:33Z" | ---
license: apache-2.0
datasets:
- wikipedia
- mc4
- cc100
- oscar
language:
- ja
---
# japanese-large-lm-1.7b
This repository provides a 1.7B parameters Japanese language model, trained by [LINE Corporation](https://linecorp.com/ja/).
[Tech Blog](https://engineering.linecorp.com/ja/blog/3.6-billion-parameter-japanese-language-model) explains details.
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-1.7b", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-1.7b", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
set_seed(101)
text = generator(
"おはようございます、今日の天気は",
max_length=30,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
num_return_sequences=5,
)
for t in text:
print(t)
# [{'generated_text': 'おはようございます、今日の天気は雨模様ですね。梅雨のこの時期の ジメジメ、ムシムシはたまらないですねえ~。 皆さんもお'},
# {'generated_text': 'おはようございます、今日の天気は快晴。 そして、朝8時15分には、 8月9日現在の、 月島・勝どき・'},
# {'generated_text': 'おはようございます、今日の天気は曇りです。 朝起きたら雪がチラついていました。 日中も雪が舞い散るような天気です。 朝から寒いですね。'},
# {'generated_text': 'おはようございます、今日の天気は雨です。昨日、天気が悪く洗濯物を干しにベランダに出た時に雨に降られ、風邪が悪化しそうです。今日洗濯'},
# {'generated_text': 'おはようございます、今日の天気は晴天ですが涼しい1日です、気温は午後になり 若干下がる予報です。 6月も10日を'}]
```
## Model architecture
| Model | Vocab size | Architecture | Position type | Layers | Hidden dim | Attention heads |
| :---: | :--------: | :----------- | :-----------: | :----: | :--------: | :-------------: |
| 1.7B | 51200 | GPT2 | Absolute | 24 | 2304 | 24 |
| 3.6B | 51200 | GPTNeoX | RoPE | 30 | 3072 | 32 |
## Training Corpus
Our training corpus consists of the Japanese portions of publicly available corpus such as C4, CC-100, and Oscar.
We also incorporated the Web texts crawled by in-house system.
The total size of our training corpus is about 650 GB.
The trained model achieves 8.57 perplexity on the internal validation sets of Japanese C4.
## Tokenization
We use a sentencepiece tokenizer with a unigram language model and byte-fallback.
We **do not** apply pre-tokenization with a Japanese tokenizer.
Thus, a user may directly feed raw sentences into the tokenizer.
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
Sao10K/Solus-103B-L2 | Sao10K | "2023-11-28T14:23:09Z" | 1,053 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T04:08:07Z" | ---
license: cc-by-nc-4.0
language:
- en
---
A frankenmerge of two finetunes, Euryale + something I cooked.
Experimental 100B versions. They felt slightly better than the base 70B models, without the spelling/number issues that 120B models like Goliath had.
Prompt Template: Vicuna 1.1 / Alpaca | Both Works. I tested.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
or
```
User: <Prompt>
Assistant:
```
For a frankenmerge, it is much more sensitive to Sampler settings than regular 70b models, so please be mindful of that. Check your settings before blaming the model.
Enjoy.
***
Mini Rant:
Frankenmerges are a meme but these are my old old frankenmerges, before Goliath came out, I did it a while back as a proof of concept, but felt that it was not worth it compared to the requirements increase.
Turns out people only care about bigger parameters kek. Could have been famous or something. Maybe. Or nah.
|
klandtech/kland_nam2e_gguf | klandtech | "2024-06-22T01:29:41Z" | 1,053 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | "2024-06-22T01:12:17Z" | ---
license: mit
---
|
LyliaEngine/astraaliKarvapillu_v2 | LyliaEngine | "2024-06-27T03:26:30Z" | 1,053 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"license:cdla-permissive-2.0",
"region:us"
] | text-to-image | "2024-06-27T02:57:35Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
2 girls, ass in skintight shiny glitter tight bodysuit spandex, perfectly
fits on ass, straight stand, thigh gap, long thin legs, ass and legs focus,
big wide black belt on top of the butt, shadow on ass, (score_9,
score_8_up:1.1), score_7_up, 1girl, solo, thigh gap, colorful, round perfect
ass, tall blonde, back view, black background, (short crop top)
parameters:
negative_prompt: >-
score_6, score_5, score_4, pony, gaping, muscular, censored, furry, child,
kid, chibi, monochrome, dick, cock, thong, panties
output:
url: images/00131-3253674009.jpeg
- text: >-
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up,
saggy_breasts, cute face, cameltoe, (knight armor, metal bikini,
thighhighs), ornaments on armor, (dark background:1.2), (muscular:1.2), huge
cameltoe, blond hair, bobcut, cinematic lighting, volumetric lighting, wet
and slimy body, oiled skin, view from behind, fat ass,
output:
url: images/00149-2311079340.jpeg
- text: >-
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up,
\nsource_photo.\n\n(trending leitz photo, highly professional, realism, best
quality, extremely detailed, voluptuous girl, on bed, fat thighs, saggy
breasts, amazing ass. barefoot, beautiful feminine hand, best quality, ultra
realistic, masterful portrait, catchlights in beautiful eyes, pouty lips,
alluring face, vibrant iris, white laced underwear, zPDXL
parameters:
negative_prompt: >-
Ugly, deformed, worst quality, bad quality, low quality, blurry, ugly,
loli, lipstick, small fake tits, narrow thighs, small fake ass, fat
stomach, teeth, tail, furry.\n\nshort man, obese man, zPDXL-neg
output:
url: images/00181-3241797751.jpeg
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: None
license: cdla-permissive-2.0
---
# astraaliKarvapillu_v2
<Gallery />
## Model description
I have merged a few of my own LoRAs into this, including a LoRA I made out of my own decades-old artwork, so it leans toward that style. I don't know how versatile this is compared to base Pony, but I made this for myself and decided to share it. Also, I broke the hands a bit in this, so you will need to do whatever magic you usually do to fix them.

The VAE is baked in.

Use your typical Pony prompting, etc.

Settings I use normally:
- Euler a
- Highres fix 1.25x, R-ESRGAN 4x+ Anime6B, denoise 0.3
- ADetailer on, 1024 res
## Source
https://civitai.com/models/358148/astraalikarvapilluponybased
## Credit
https://civitai.com/user/Astraali
## Trigger words
You should use `None` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LyliaEngine/astraaliKarvapillu_v2/tree/main) them in the Files & versions tab.
|
antony66/whisper-large-v3-russian | antony66 | "2024-07-02T20:25:06Z" | 1,052 | 2 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"asr",
"russian",
"ru",
"dataset:mozilla-foundation/common_voice_17_0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-17T12:42:42Z" | ---
language:
- ru
library_name: transformers
tags:
- asr
- whisper
- russian
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
---
# Model Details
This is a version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) finetuned for better support of Russian language.
Dataset used for finetuning is Common Voice 17.0, Russian part, that contains over 200k rows.
After preprocessing of the original dataset (all splits were mixed and re-split into a new train + test split at 0.95/0.05,
that is 225761/11883 rows respectively), the original Whisper v3 has a WER of 9.84 while the finetuned version shows 6.39 (so far).
The finetuning process took over 60 hours on dual Tesla A100 80Gb.
## Usage
In order to process phone calls, it is highly recommended that you preprocess your recordings and adjust the volume before performing ASR. For example, like this:
```bash
sox record.wav -r 16k record-normalized.wav norm -0.5 compand 0.3,1 -90,-90,-70,-70,-60,-20,0,0 -5 0 0.2
```
Then your ASR code should look somewhat like this:
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor, pipeline
torch_dtype = torch.bfloat16 # set your preferred type here
device = 'cpu'
if torch.cuda.is_available():
device = 'cuda'
elif torch.backends.mps.is_available():
device = 'mps'
setattr(torch.distributed, "is_initialized", lambda : False) # monkey patching
device = torch.device(device)
whisper = WhisperForConditionalGeneration.from_pretrained(
"antony66/whisper-large-v3-russian", torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True,
# add attn_implementation="flash_attention_2" if your GPU supports it
)
processor = WhisperProcessor.from_pretrained("antony66/whisper-large-v3-russian")
asr_pipeline = pipeline(
"automatic-speech-recognition",
model=whisper,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=256,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
)
# read your wav file into variable wav. For example:
from io import BytesIO
wav = BytesIO()
with open('record-normalized.wav', 'rb') as f:
wav.write(f.read())
wav.seek(0)
# get the transcription
asr = asr_pipeline(wav, generate_kwargs={"language": "russian", "max_new_tokens": 256}, return_timestamps=False)
print(asr['text'])
```
## Work in progress
This model is in a WIP state for now. The goal is to finetune it for speech recognition of phone calls as much as possible. If you want to contribute and you know of or have any good dataset, please let me know. Your help will be much appreciated. |
chujiezheng/LLaMA3-iterative-DPO-final-ExPO | chujiezheng | "2024-05-27T18:16:46Z" | 1,052 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:2404.16792",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-18T03:04:05Z" | ---
language:
- en
license: llama3
---
# LLaMA3-iterative-DPO-final-ExPO
The extrapolated (ExPO) model based on [`RLHFlow/LLaMA3-iterative-DPO-final`](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final) and [`RLHFlow/LLaMA3-SFT`](https://huggingface.co/RLHFlow/LLaMA3-SFT), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
Specifically, we obtain this model by extrapolating **(alpha = 0.3)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference.
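A minimal sketch of the extrapolation in parameter space (illustrative only; it assumes the weight-wise rule theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft) and a hypothetical output directory):
```python
import torch
from transformers import AutoModelForCausalLM

alpha = 0.3
sft = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-SFT", torch_dtype=torch.bfloat16)
dpo = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final", torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
expo_state = dpo.state_dict()
for name, dpo_param in expo_state.items():
    if dpo_param.is_floating_point():
        # theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)
        expo_state[name] = dpo_param + alpha * (dpo_param - sft_state[name])

dpo.load_state_dict(expo_state)
dpo.save_pretrained("LLaMA3-iterative-DPO-final-ExPO")  # hypothetical output path
```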
## Evaluation Results
Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)):
| | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) |
| ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** |
| `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** |
| `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** |
| `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** |
| `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** |
| `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** |
| `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** |
| `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** |
| `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** |
| `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** |
| `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** |
Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)):
| | Original | + ExPO |
| ------------------------------------ | -------- | -------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** |
| `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** |
| `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** |
| `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** |
| `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** |
| `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** |
| `internlm/internlm2-chat-7b` | 7.72 | **7.80** |
| `internlm/internlm2-chat-20b` | 8.13 | **8.26** |
| `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** |
| `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** |
| `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
|
THUDM/glm-10b | THUDM | "2023-03-02T13:50:18Z" | 1,051 | 33 | transformers | [
"transformers",
"pytorch",
"glm",
"feature-extraction",
"thudm",
"custom_code",
"en",
"arxiv:2103.10360",
"region:us"
] | feature-extraction | "2023-02-28T16:36:20Z" | ---
language:
- en
tags:
- glm
- thudm
---
GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
Please refer to our paper for a detailed description of GLM:
[GLM: General Language Model Pretraining with Autoregressive Blank Infilling](https://arxiv.org/abs/2103.10360) (ACL 2022)
Zhengxiao Du*, Yujie Qian*, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang (*: equal contribution)
Find more examples in our [Github repo](https://github.com/THUDM/GLM).
## Model description
`glm-10b` is pretrained on the [Pile](https://pile.eleuther.ai) dataset. It has 48 transformer layers, with hidden size 4096 and 64 attention heads in each layer. The model is pretrained with autoregressive blank filling objectives designed for natural language understanding, seq2seq, and language modeling. Find more details from our [repo](https://github.com/THUDM/GLM).
## How to use
Please refer to the [instructions](https://github.com/THUDM/GLM#hugging-face-hub) in our Github repo.
We use three different mask tokens for different tasks: `[MASK]` for short blank filling, `[sMASK]` for sentence filling, and `[gMASK]` for left-to-right generation. You can find examples for the different masks [here](https://github.com/THUDM/GLM#left-to-right-generation--blank-filling-interactive). The prediction always begins with a special `<|startofpiece|>` token and ends with a `<|endofpiece|>` token.
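A minimal blank-filling sketch (adapted from the examples in the GLM GitHub repo; helpers such as `build_inputs_for_generation` and `eop_token_id` come from the custom code loaded with `trust_remote_code=True` and may differ between versions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = model.half().cuda()
model.eval()

# Short blank filling with the [MASK] token
inputs = tokenizer("Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab).", return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=512)
inputs = inputs.to("cuda")
outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
```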
## Citation
Please cite our paper if you find this code useful for your research:
```
@article{DBLP:conf/acl/DuQLDQY022,
author = {Zhengxiao Du and
Yujie Qian and
Xiao Liu and
Ming Ding and
Jiezhong Qiu and
Zhilin Yang and
Jie Tang},
title = {{GLM:} General Language Model Pretraining with Autoregressive Blank Infilling},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), {ACL} 2022, Dublin, Ireland,
May 22-27, 2022},
pages = {320--335},
publisher = {Association for Computational Linguistics},
year = {2022},
}
```
|
Green-Sky/bitnet_b1_58-3B-GGUF | Green-Sky | "2024-06-29T08:20:03Z" | 1,051 | 6 | null | [
"gguf",
"wip",
"en",
"base_model:1bitLLM/bitnet_b1_58-3B",
"license:mit",
"region:us"
] | null | "2024-06-15T10:17:43Z" | ---
license: mit
tags:
- gguf
- wip
base_model:
- 1bitLLM/bitnet_b1_58-3B
model_type: bitnet
quantized_by: Green-Sky
language:
- en
---
# Highly experimental, not for general consumption
The code needed to run this model, as well as the base model itself, is not ready yet.
This is uploaded merely to help with testing.
~~see https://github.com/ggerganov/llama.cpp/pull/7931~~
see https://github.com/ggerganov/llama.cpp/pull/8151 , the continued work by compilade, providing both 1.625bpw and 2bpw |
FacebookAI/xlm-mlm-100-1280 | FacebookAI | "2023-01-24T14:49:58Z" | 1,050 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"es",
"fr",
"de",
"zh",
"ru",
"pt",
"it",
"ar",
"ja",
"id",
"tr",
"nl",
"pl",
"fa",
"vi",
"sv",
"ko",
"he",
"ro",
"no",
"hi",
"uk",
"cs",
"fi",
"hu",
"th",
"da",
"ca",
"el",
"bg",
"sr",
"ms",
"bn",
"hr",
"sl",
"az",
"sk",
"eo",
"ta",
"sh",
"lt",
"et",
"ml",
"la",
"bs",
"sq",
"arz",
"af",
"ka",
"mr",
"eu",
"tl",
"ang",
"gl",
"nn",
"ur",
"kk",
"be",
"hy",
"te",
"lv",
"mk",
"als",
"is",
"wuu",
"my",
"sco",
"mn",
"ceb",
"ast",
"cy",
"kn",
"br",
"an",
"gu",
"bar",
"uz",
"lb",
"ne",
"si",
"war",
"jv",
"ga",
"oc",
"ku",
"sw",
"nds",
"ckb",
"ia",
"yi",
"fy",
"scn",
"gan",
"tt",
"am",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language:
- multilingual
- en
- es
- fr
- de
- zh
- ru
- pt
- it
- ar
- ja
- id
- tr
- nl
- pl
- fa
- vi
- sv
- ko
- he
- ro
- no
- hi
- uk
- cs
- fi
- hu
- th
- da
- ca
- el
- bg
- sr
- ms
- bn
- hr
- sl
- az
- sk
- eo
- ta
- sh
- lt
- et
- ml
- la
- bs
- sq
- arz
- af
- ka
- mr
- eu
- tl
- ang
- gl
- nn
- ur
- kk
- be
- hy
- te
- lv
- mk
- als
- is
- wuu
- my
- sco
- mn
- ceb
- ast
- cy
- kn
- br
- an
- gu
- bar
- uz
- lb
- ne
- si
- war
- jv
- ga
- oc
- ku
- sw
- nds
- ckb
- ia
- yi
- fy
- scn
- gan
- tt
- am
license: cc-by-nc-4.0
---
# xlm-mlm-100-1280
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
xlm-mlm-100-1280 is the XLM model, which was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
## Model Description
- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 100 languages, see [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-17-1280](https://huggingface.co/xlm-mlm-17-1280)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
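For example, masked tokens can be predicted with the fill-mask pipeline (a minimal sketch, not from the original model card; note that XLM uses language embeddings, so for best results the input language may need to be set explicitly as described in the multilingual docs):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="FacebookAI/xlm-mlm-100-1280")
mask = unmasker.tokenizer.mask_token  # avoid hardcoding the XLM-specific mask token
print(unmasker(f"Paris is the capital of {mask}."))
```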
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair-encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the training data and training procedure.
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See the [GitHub Repo](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-100-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are:
|Language| en | es | de | ar | zh | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|
| |83.7|76.6 |73.6|67.4|71.7|62.9|
See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. See the [ipython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples. |
valhalla/longformer-base-4096-finetuned-squadv1 | valhalla | "2021-02-10T16:35:40Z" | 1,050 | 21 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"longformer",
"question-answering",
"dataset:squad_v1",
"arxiv:2004.05150",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
datasets:
- squad_v1
license: mit
---
# LONGFORMER-BASE-4096 fine-tuned on SQuAD v1
This is longformer-base-4096 model fine-tuned on SQuAD v1 dataset for question answering task.
[Longformer](https://arxiv.org/abs/2004.05150) model created by Iz Beltagy, Matthew E. Peters, and Arman Cohan from AllenAI. As the paper explains it:
> `Longformer` is a BERT-like model for long documents.
The pre-trained model can handle sequences with up to 4096 tokens.
## Model Training
This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here [](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing).
A few things to keep in mind while training Longformer for the QA task:
by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details on this please refer to the paper. The `LongformerForQuestionAnswering` model automatically does that for you. To allow it to do that:
1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this
` <s> question</s></s> context</s>`. If you encode the question and context as an input pair, then the tokenizer already takes care of that, and you shouldn't worry about it.
2. `input_ids` should always be a batch of examples.
## Results
|Metric | # Value |
|-------------|---------|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
## Model in Action 🚀
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
The `LongformerForQuestionAnswering` isn't yet supported in `pipeline`. I'll update this card once the support has been added.
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
microsoft/xdoc-base-squad2.0 | microsoft | "2022-11-16T13:44:25Z" | 1,050 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"arxiv:2210.02849",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-11-14T04:14:14Z" | ---
license: mit
---
# XDoc
## Introduction
XDoc is a unified pre-trained model that deals with different document formats in a single model. With only 36.7% parameters, XDoc achieves comparable or better performance on downstream tasks, which is cost-effective for real-world deployment.
[XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849)
Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, [EMNLP 2022](#)
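Since this checkpoint is a RoBERTa-style model fine-tuned for extractive question answering on SQuAD 2.0, it can presumably be used with the standard pipeline (a minimal sketch, not from the original card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="microsoft/xdoc-base-squad2.0")
result = qa(
    question="What does XDoc deal with?",
    context="XDoc is a unified pre-trained model that deals with different document formats in a single model.",
)
print(result["answer"], result["score"])
```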
## Citation
If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
journal={arXiv preprint arXiv:2210.02849},
year={2022}
}
```
|
lorahub/flan_t5_large-wiki_qa_Jeopardy_style | lorahub | "2023-07-24T10:08:41Z" | 1,050 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-24T10:08:31Z" | ---
library_name: peft
---
|
speechlessai/speechless-mistral-six-in-one-7b-orth-1.0 | speechlessai | "2023-12-11T10:56:26Z" | 1,050 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"code",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:ehartford/samantha-data",
"dataset:CollectiveCognition/chats-data-2023-09-27",
"dataset:stingning/ultrachat",
"arxiv:2310.06825",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T10:42:46Z" | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
- CollectiveCognition/chats-data-2023-09-27
- stingning/ultrachat
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
---
<p><h1> speechless-mistral-six-in-one-7b-orth-1.0 </h1></p>
# JUST for TEST!
Modifying the base model weights in the direction of the changes that occurred during fine-tuning, but only considering those changes that are orthogonal to the original weight direction.
This approach aims to capture the essence of the fine-tuning while maintaining the original structure as much as possible.
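A rough sketch of the idea for a single weight matrix (an interpretation of the description above, not the exact script used): the fine-tuning delta is split into a component parallel to the base weights and a component orthogonal to them, and only the orthogonal part is applied.
```python
import torch

def orthogonal_update(w_base: torch.Tensor, w_finetuned: torch.Tensor) -> torch.Tensor:
    """Apply only the part of the fine-tuning delta orthogonal to the base weight direction."""
    delta = (w_finetuned - w_base).flatten()
    base = w_base.flatten()
    # Project the delta onto the base weight direction, then discard that component.
    parallel = (delta @ base) / (base @ base) * base
    orthogonal = delta - parallel
    return w_base + orthogonal.view_as(w_base)
```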
<p><h1> speechless-mistral-six-in-one-7b </h1></p>
This model is a merge of 6 SOTA Mistral-7B based models:
- ehartford/dolphin-2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- bhenrym14/mistral-7b-platypus-fp16
- ehartford/samantha-1.2-mistral-7b
- teknium/CollectiveCognition-v1.1-Mistral-7B
- HuggingFaceH4/zephyr-7b-alpha
[Model benchmark](https://huggingface.co/uukuguy/speechless-mistral-six-in-one-7b/discussions/1) by [sethuiyer](https://huggingface.co/sethuiyer). Thanks a lot.
> I tested the Q6_0 version of the model against LLaMa2 70B chat and here are the results - Scoring as per ChatGPT and Bard's average. Named this model Mixtral. Questions taken from MT-Benchmark.
>
> On a scale of 0 to 100, I would rate Mixtral at 98. Here's why:
>
> - Intellect (100/100) - Mixtral has demonstrated immense intellectual abilities through its comprehensive knowledge and logical reasoning skills.
> - Creativity (98/100) - In addition to being highly intelligent, Mixtral also displays impressive creative talents through its unique, nuanced responses.
> - Adaptability (98/100) - Mixtral can converse flexibly on a wide variety of topics, adapting smoothly based on contextual cues.
> - Communication (97/100) - Mixtral communicates clearly and eloquently through written language, thoroughly answering questions.
> - Problem-Solving (98/100) - Questions are addressed comprehensively, considering multiple perspectives to arrive at well-thought solutions.
> - Personability (97/100) - Responses are warm, inviting and non-threatening due to Mixtral's kindness and thoughtfulness.
>
> Overall, a very capable model for it's size.
Code: https://github.com/uukuguy/speechless
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
Mistral-7B-v0.1: 30.488
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 62.97 |
| HellaSwag | 84.6|
| MMLU | 63.29 |
| TruthfulQA | 57.77 |
| Winogrande | 77.51 |
| GSM8K | 18.42 |
| DROP | 9.13 |
| Average | 53.38 |
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
NobodyExistsOnTheInternet/Llama-2-70b-x8-MoE-clown-truck | NobodyExistsOnTheInternet | "2024-01-23T14:14:45Z" | 1,050 | 8 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T07:12:13Z" | ---
license: mit
---

The biggest model ever to have been released. Has not been tested, nor do I have the compute to test it. If anyone is willing to host this to help me test, please share your results in the community tab.
Thank you for coming to my ted talk.
This is nearly 960GB of weights. It requires at least 8xA100 80gb to run it in 4 bits probably. *probably*
|
cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated | cognitivecomputations | "2024-06-03T01:04:53Z" | 1,050 | 11 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:cognitivecomputations/Dolphin-2.9.2",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:internlm/Agent-FLAN",
"dataset:cognitivecomputations/SystemChat-2.0",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-03T01:01:40Z" | ---
license: mit
language:
- en
base_model:
- unsloth/Phi-3-mini-4k-instruct
datasets:
- cognitivecomputations/Dolphin-2.9.2
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- internlm/Agent-FLAN
- cognitivecomputations/SystemChat-2.0
---
# Dolphin 2.9.2 Phi 3 Medium (Abliterated) 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and with help from the community of Cognitive Computations
Uncensored by [FailSpy](https://huggingface.co/failspy)
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsor of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node
This model is based on Phi-3-Medium-Instruct-4k, and is governed by the MIT license with which Microsoft released Phi-3.
Since Microsoft only released the fine-tuned model, Dolphin-2.9.2-Phi-3-Medium has not been entirely cleaned of refusals.
The base model has 4k context, and the qLoRA fine-tuning was with 4k sequence length.
The model's weights were then adjusted to ablate and inhibit refusals based on the methodology described in ['Refusal in LLMs is mediated by a single direction'](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)
This effectively uncensors the model while minimizing the effect on other features of the model.
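Conceptually, the ablation projects an estimated "refusal direction" out of the weight matrices that write into the residual stream, e.g. (a schematic sketch of the method described in the linked post, not the exact procedure or layer set used here):
```python
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component of each output along the refusal direction.

    weight: (d_model, d_in) matrix writing into the residual stream.
    refusal_dir: vector of size d_model estimated from contrastive harmful/harmless prompts.
    """
    r = refusal_dir / refusal_dir.norm()
    # W' = (I - r r^T) W, so the layer can no longer move activations along r.
    return weight - torch.outer(r, r) @ weight
```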
It took 3.5 days on an 8xL40S node provided by Crusoe Cloud
This model uses the ChatML prompt template.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## evals:
<img src="https://i.ibb.co/jrBsPLY/file-9gw-A1-Ih-SBYU3-PCZ92-ZNb-Vci-P.png" width="600" />
|
mradermacher/oneirogen-0.5B-GGUF | mradermacher | "2024-06-13T21:06:32Z" | 1,050 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:gustavecortal/DreamBank-annotated",
"base_model:gustavecortal/oneirogen-0.5B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T22:48:26Z" | ---
base_model: gustavecortal/oneirogen-0.5B
datasets:
- gustavecortal/DreamBank-annotated
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gustavecortal/oneirogen-0.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
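For example, with the `llama-cpp-python` bindings a single-file quant can be loaded like this (a sketch; pick a file name from the table below):
```python
from llama_cpp import Llama

llm = Llama(model_path="oneirogen-0.5B.Q4_K_M.gguf", n_ctx=2048)
out = llm("Last night I dreamed that", max_tokens=64)
print(out["choices"][0]["text"])
```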
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.IQ3_XS.gguf) | IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.IQ3_M.gguf) | IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/oneirogen-0.5B-GGUF/resolve/main/oneirogen-0.5B.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
anonymous-german-nlp/german-gpt2 | anonymous-german-nlp | "2021-05-21T13:20:42Z" | 1,049 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: de
widget:
- text: "Heute ist sehr schönes Wetter in"
license: mit
---
# German GPT-2 model
**Note**: This model was de-anonymized and now lives at:
https://huggingface.co/dbmdz/german-gpt2
Please use the new model name instead! |
Helsinki-NLP/opus-mt-tc-big-en-cat_oci_spa | Helsinki-NLP | "2023-10-10T10:38:12Z" | 1,049 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"ca",
"en",
"es",
"oc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-04-13T13:40:56Z" | ---
language:
- ca
- en
- es
- oc
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-cat_oci_spa
results:
- task:
name: Translation eng-cat
type: translation
args: eng-cat
dataset:
name: flores101-devtest
type: flores_101
args: eng cat devtest
metrics:
- name: BLEU
type: bleu
value: 41.5
- task:
name: Translation eng-oci
type: translation
args: eng-oci
dataset:
name: flores101-devtest
type: flores_101
args: eng oci devtest
metrics:
- name: BLEU
type: bleu
value: 25.4
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: flores101-devtest
type: flores_101
args: eng spa devtest
metrics:
- name: BLEU
type: bleu
value: 28.1
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: news-test2008
type: news-test2008
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 30.0
- task:
name: Translation eng-cat
type: translation
args: eng-cat
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-cat
metrics:
- name: BLEU
type: bleu
value: 47.8
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 57.0
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: tico19-test
type: tico19-test
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 52.5
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2009
type: wmt-2009-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 30.5
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2010
type: wmt-2010-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 37.4
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2011
type: wmt-2011-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 39.1
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2012
type: wmt-2012-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 39.6
- task:
name: Translation eng-spa
type: translation
args: eng-spa
dataset:
name: newstest2013
type: wmt-2013-news
args: eng-spa
metrics:
- name: BLEU
type: bleu
value: 35.8
---
# opus-mt-tc-big-en-cat_oci_spa
Neural machine translation model for translating from English (en) to Catalan, Occitan and Spanish (cat+oci+spa).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): cat spa
* valid target language labels: >>cat<< >>spa<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cat+oci+spa/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT eng-cat+oci+spa README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cat+oci+spa/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>cat<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>spa<< Why do you want Tom to go there with me?",
">>spa<< She forced him to eat spinach."
]
model_name = "pytorch-models/opus-mt-tc-big-en-cat_oci_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# ¿Por qué quieres que Tom vaya conmigo?
# Ella lo obligó a comer espinacas.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-cat_oci_spa")
print(pipe(">>spa<< Why do you want Tom to go there with me?"))
# expected output: ¿Por qué quieres que Tom vaya conmigo?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cat+oci+spa/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cat+oci+spa/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-cat | tatoeba-test-v2021-08-07 | 0.66414 | 47.8 | 1631 | 12344 |
| eng-spa | tatoeba-test-v2021-08-07 | 0.73725 | 57.0 | 16583 | 134710 |
| eng-cat | flores101-devtest | 0.66071 | 41.5 | 1012 | 27304 |
| eng-oci | flores101-devtest | 0.56192 | 25.4 | 1012 | 27305 |
| eng-spa | flores101-devtest | 0.56288 | 28.1 | 1012 | 29199 |
| eng-spa | newssyscomb2009 | 0.58431 | 31.4 | 502 | 12503 |
| eng-spa | news-test2008 | 0.56622 | 30.0 | 2051 | 52586 |
| eng-spa | newstest2009 | 0.57988 | 30.5 | 2525 | 68111 |
| eng-spa | newstest2010 | 0.62343 | 37.4 | 2489 | 65480 |
| eng-spa | newstest2011 | 0.62424 | 39.1 | 3003 | 79476 |
| eng-spa | newstest2012 | 0.63006 | 39.6 | 3003 | 79006 |
| eng-spa | newstest2013 | 0.60291 | 35.8 | 3000 | 70528 |
| eng-spa | tico19-test | 0.73224 | 52.5 | 2100 | 66563 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 16:40:45 EEST 2022
* port machine: LM0-400-22516.local
|
timm/beit_large_patch16_224.in22k_ft_in22k_in1k | timm | "2023-05-08T23:26:01Z" | 1,049 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2106.08254",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-23T02:28:05Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for beit_large_patch16_224.in22k_ft_in22k_in1k
A BEiT image classification model. Trained on ImageNet-22k with self-supervised masked image modelling (MIM) using a DALL-E dVAE as visual tokenizer. Fine-tuned on ImageNet-22k and then ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 304.4
- GMACs: 61.6
- Activations (M): 63.5
- Image size: 224 x 224
- **Papers:**
- BEiT: BERT Pre-Training of Image Transformers: https://arxiv.org/abs/2106.08254
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beit_large_patch16_224.in22k_ft_in22k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beit_large_patch16_224.in22k_ft_in22k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{bao2021beit,
title={Beit: Bert pre-training of image transformers},
author={Bao, Hangbo and Dong, Li and Piao, Songhao and Wei, Furu},
journal={arXiv preprint arXiv:2106.08254},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
maddes8cht/mosaicml-mpt-7b-storywriter-gguf | maddes8cht | "2023-11-01T15:36:56Z" | 1,049 | 2 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:the_pile_books3",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
] | null | "2023-10-17T20:18:26Z" | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# mpt-7b-storywriter - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
---
# Brief
MPT-7B StoryWriter is a model based on MPT-7B, designed to read and write fictional stories with super long context lengths.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model: ask your model the same question twice and you may encounter bigger quality differences.
---
# Original Model Card:
# MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
Apache 2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-storywriter'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 83968 # (input + output) tokens can now be up to 83968
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Community-Created Resources
These were not created by MosaicML, but you may find them useful. These links are not an endorsement of the creators or their content.
- [Oobabooga Running MPT-7B-Storywriter](https://youtu.be/QVVb6Md6huA)
- [NEW MPT-7B-StoryWriter CRUSHES GPT-4!](https://www.youtube.com/watch?v=O9Y_ZdsuKWQ&t=649s) - Has a long section on running locally using Oobabooga
## Example Epilogue
The full text of the _The Great Gatsby_ (67873 tokens) was fed to the model, followed by the text "EPILOGUE"
**TheRealFitzgerald233**:
> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE
**MPT-7B-StoryWriter**:
>I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
>And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
>And I knew that day that I didn't love her.
>
>That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
>I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
>They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
>There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
>But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
>And I knew that he had not changed at all.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
blink7630/graphic-novel-illustration | blink7630 | "2023-11-14T18:08:31Z" | 1,049 | 24 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"comic book",
"style",
"graphic novel",
"illustration",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-11-14T18:08:23Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- comic book
- style
- graphic novel
- illustration
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: graphic novel illustration
widget:
- text: 'breathtaking highly detailed graphic novel illustration of morgan freeman riding a harley davidson motorcycle , dark and gritty'
output:
url: >-
3158317.jpeg
- text: 'breathtaking graphic novel illustration Amidst the vast expanse of a nebula, a lone astronaut floats, tethered to a luminous starship. The radiant gases of the nebula dance around, reflecting vibrant colors on the suit, creating a ballet of light and motion. '
output:
url: >-
3144829.jpeg
- text: 'breathtaking graphic novel illustration of At a crossroads in a post-apocalyptic wasteland, a wanderer with a guitar made of scrap metal sings tales of the old world. Mutant creatures gather around the campfire, their eyes filled with longing. '
output:
url: >-
3144831.jpeg
- text: 'breathtaking graphic novel illustration of A child chasing colorful butterflies in a vibrant meadow. '
output:
url: >-
3144832.jpeg
- text: 'breathtaking graphic novel illustration On a beach where the waves are woven with stories, a storyteller with a loom of moonlight crafts tales from the tides. Shells whisper legends, and the horizon is a tapestry of adventures yet to be told. '
output:
url: >-
3144833.jpeg
- text: 'breathtaking graphic novel illustration of a falcon soaring above a desert, searching for prey. '
output:
url: >-
3144834.jpeg
- text: 'breathtaking graphic novel illustration Deep within the arctic circle, an aurora paints the sky with vibrant hues of green, pink, and purple. The snow below reflects this dance of lights, and the world is enveloped in a mesmerizing symphony of color. '
output:
url: >-
3144835.jpeg
- text: 'breathtaking graphic novel illustration of A tattoo artist detailing an intricate design on a client''s back. '
output:
url: >-
3144836.jpeg
- text: 'breathtaking graphic novel illustration of A woman, draped in colorful silks, reading tarot cards in a candle-lit tent. '
output:
url: >-
3144838.jpeg
- text: 'breathtaking graphic novel illustration of Two spelunkers mapping out a deep cavern, torches lighting the way. '
output:
url: >-
3144839.jpeg
---
# Graphic Novel Illustration
<Gallery />
<p>This LoRA was trained on 65 AI generated images. Each of those was created with my Storyboard Sketch LoRA at 40% strength and a mix of comic book and graphic novel related keywords that reliably created the style I was after.</p><p>All sample images above were created with the LoRA strength at 1.0. Most were created using SDXL Base, but the more vibrant ones were with DynaVisionXL. If you experiment with other checkpoints, I'd love to see how they turn out!</p><p><strong>Trigger words: graphic novel illustration</strong></p>
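To try the LoRA programmatically, a minimal diffusers sketch on top of SDXL Base is shown below (a rough example only; the weight filename auto-detection and the sampler/step settings are assumptions, not part of the original card):

```python
import torch
from diffusers import DiffusionPipeline

# Load SDXL Base and attach the LoRA (pass weight_name="..." explicitly
# if the repository layout requires it).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("blink7630/graphic-novel-illustration")

prompt = "breathtaking graphic novel illustration of a lighthouse in a storm, dark and gritty"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("graphic_novel_lighthouse.png")
```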
## Image examples for the model:

> breathtaking graphic novel illustration Amidst the vast expanse of a nebula, a lone astronaut floats, tethered to a luminous starship. The radiant gases of the nebula dance around, reflecting vibrant colors on the suit, creating a ballet of light and motion.

> breathtaking graphic novel illustration of At a crossroads in a post-apocalyptic wasteland, a wanderer with a guitar made of scrap metal sings tales of the old world. Mutant creatures gather around the campfire, their eyes filled with longing.

> breathtaking graphic novel illustration of A child chasing colorful butterflies in a vibrant meadow.

> breathtaking graphic novel illustration On a beach where the waves are woven with stories, a storyteller with a loom of moonlight crafts tales from the tides. Shells whisper legends, and the horizon is a tapestry of adventures yet to be told.

> breathtaking graphic novel illustration of a falcon soaring above a desert, searching for prey.

> breathtaking graphic novel illustration Deep within the arctic circle, an aurora paints the sky with vibrant hues of green, pink, and purple. The snow below reflects this dance of lights, and the world is enveloped in a mesmerizing symphony of color.

> breathtaking graphic novel illustration of A tattoo artist detailing an intricate design on a client's back.

> breathtaking graphic novel illustration of A woman, draped in colorful silks, reading tarot cards in a candle-lit tent.

> breathtaking graphic novel illustration of Two spelunkers mapping out a deep cavern, torches lighting the way.
|
uukuguy/mistral-7b-platypus-fp16-dare-0.9 | uukuguy | "2023-11-22T03:44:58Z" | 1,049 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-20T05:24:14Z" | ---
license: llama2
---
An experiment with DARE (Drop And REscale): most of the delta parameters can be set directly to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
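A minimal sketch of the drop-and-rescale operation on a single tensor (illustrative only; the 0.9 drop rate mirrors this model's name, and the state_dict names in the trailing comment are hypothetical):

```python
import torch

def dare_merge(base_w: torch.Tensor, sft_w: torch.Tensor, drop_rate: float = 0.9) -> torch.Tensor:
    """Drop And REscale: randomly zero most of the SFT delta, then rescale the survivors."""
    delta = sft_w - base_w                        # delta parameters introduced by SFT
    keep = torch.rand_like(delta) >= drop_rate    # keep roughly (1 - drop_rate) of the entries
    delta = delta * keep / (1.0 - drop_rate)      # rescale so the expected delta is preserved
    return base_w + delta

# e.g. merged = {k: dare_merge(base_sd[k], sft_sd[k]) for k in base_sd}  # over two state_dicts
```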
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
|
PassionFriend/5GGvUmpbb7ZzuqGRmbf3ZNLYoVQz6KrMqW1tVJyu44J6hgxV_vgg | PassionFriend | "2024-03-01T06:42:20Z" | 1,049 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-14T12:56:20Z" | Entry not found |
backyardai/InfinityKuno-2x7B-GGUF | backyardai | "2024-05-22T22:26:37Z" | 1,049 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"text-generation-inference",
"text-generation",
"en",
"base_model:R136a1/InfinityKuno-2x7B",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-21T15:13:50Z" | ---
base_model: R136a1/InfinityKuno-2x7B
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: brooketh
tags:
- roleplay
- text-generation-inference
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# InfinityKuno 2x7B
- **Creator:** [R136a1](https://huggingface.co/R136a1/)
- **Original:** [InfinityKuno 2x7B](https://huggingface.co/R136a1/InfinityKuno-2x7B)
- **Date Created:** 2024-03-17
- **Trained Context:** 4096 tokens
- **Description:** Experimental MoE model combining Endevor/InfinityRP-v1-7B and SanjiWatsuki/Kunoichi-DPO-v2-7B.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
opencsg/csg-wukong-1B | opencsg | "2024-05-08T08:14:11Z" | 1,049 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-11T08:49:48Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
license: apache-2.0
---
# **csg-wukong-1B** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
**csg-wukong-1B** is a 1-billion-parameter small language model (SLM) pretrained on 1T tokens.
<br>
More information about csg-wukong-1B will be provided later.
## Model Evaluation results
We submitted csg-wukong-1B to the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and
the results show our model ranked 8th among the ~1.5B-parameter pretrained small language models.

# Training
## Hardware
- **GPUs:** 16 H800
- **Training time:** 43 days
## Software
- **Orchestration:** [Deepspeed](https://github.com/OpenCSGs)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
<p>
</p>
# About OpenCSG
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[WeChat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
In OpenCSG, "Open" stands for open source and openness; "C" stands for Converged resources, integrating and fully utilizing hybrid heterogeneous resources to reduce compute costs and improve efficiency; "S" stands for Software refined, redefining software delivery by driving development with large models to reduce labor costs and improve efficiency; "G" stands for Generative LM: popularized, inclusive, democratized, and commercially usable open-source generative large models.
The vision of OpenCSG is to enable every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, open-sourcing OpenCSG's large-model software stack to the community. Everyone is welcome to use it, give feedback, contribute, and follow the project.
## Model Description
**csg-wukong-1B** is a small language model with 1B parameters, trained on 1T tokens.
<br>
More information about this model will be provided later.
## Model Evaluation Results
We submitted csg-wukong-1B to the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and the results show that our model currently ranks 8th among the ~1.5B pretrained small language models.

# Training
## Hardware
- **Number of GPUs:** 16 H800
- **Training time:** 43 days
## Software
- **Fine-tuning framework:** [Deepspeed](https://github.com/OpenCSGs)
- **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex) |
Disty0/sotediffusion-wuerstchen3-decoder | Disty0 | "2024-06-10T14:46:56Z" | 1,049 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:other",
"diffusers:StableCascadeDecoderPipeline",
"region:us"
] | text-to-image | "2024-06-10T13:46:16Z" | ---
pipeline_tag: text-to-image
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
prior:
- Disty0/sotediffusion-wuerstchen3
---
# SoteDiffusion Wuerstchen3
Anime finetune of Würstchen V3.
# Usage
Please refer to the main model: https://huggingface.co/Disty0/sotediffusion-wuerstchen3
## Dataset
Trained with 512K images.
## Training:
**GPU used for training**: 1x AMD RX 7900 XTX 24GB
**GPU Hours**: 100
**Software used**: https://github.com/2kpr/StableCascade
### Config:
```
experiment_id: sotediffusion-wr3_3b-stage_b-alpha3
model_version: 3B
dtype: bfloat16
use_fsdp: False
batch_size: 16
grad_accum_steps: 16
updates: 102400
backup_every: 2048
save_every: 1024
warmup_updates: 128
lr: 1.0e-5
optimizer_type: Adafactor
adaptive_loss_weight: False
stochastic_rounding: True
image_size: 768
multi_aspect_ratio: [1/1, 1/2, 1/3, 2/3, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, 9/16]
shift: 4
checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/
output_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/
webdataset_path: file:/mnt/DataSSD/AI/anime_image_dataset/best/newest_best.tar
effnet_checkpoint_path: /mnt/DataSSD/AI/models/wuerstchen3/effnet_encoder.safetensors
stage_a_checkpoint_path: /mnt/DataSSD/AI/models/wuerstchen3/stage_a.safetensors
generator_checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/generator_4k-016384.safetensors
```
## Limitations and Bias
### Bias
- This model is intended for anime illustrations.
Realistic capabilities are not tested at all.
### Limitations
- Eyes in far shots can look bad.
## License
SoteDiffusion models fall under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with Stable Diffusion models' license. Key points:
1. **Modification Sharing:** If you modify SoteDiffusion models, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
**Notes**: Anything not covered by Fair AI license is inherited from Stability AI Non-Commercial license which is named as LICENSE_INHERIT.
|
Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct | Telugu-LLM-Labs | "2024-02-07T04:24:52Z" | 1,048 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"te",
"en",
"dataset:Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized",
"dataset:Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-06T12:07:42Z" | ---
license: llama2
datasets:
- Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized
- >-
Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized
language:
- te
- en
---
# Telugu-Llama2-7B-v0-Instruct
This model is based on [Telugu-Llama2-7B-v0-Base](https://huggingface.co/Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Base) and has been fine-tuned on the following instruction datasets:
1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized)
2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized)
# Input Text Format
```
### Instruction: {instruction}
### Input: {input}
## Response: {response}
```
# Usage
## With Romanized Telugu
```python3
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
instruction = "Krindi samaacharam prakaram google app eppudu release ayyindi?"
input ="Google News is a news aggregator service developed by Google. It presents a continuous flow of links to articles organized from thousands of publishers and magazines. Google News is available as an app on Android, iOS, and the Web. Google released a beta version in September 2002 and the official app in January 2006."
text = f"""Instruction: {instruction} \nInput: {input} \nResponse:"""
encodings = tokenizer(text, padding=True, return_tensors="pt")
encodings = encodings.to(device)
with torch.inference_mode():
outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=500)
output = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
```
### Sample Output:
```
1. September 2002 Google released a beta version of Google News.
2. January 2006 Google released the official version of Google News.
```
## With Native Telugu
```python3
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
instruction = "కింది వచనాన్ని సంగ్రహించండి"
input="గూగుల్ వార్తలు అనేది గూగుల్ ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది. గూగుల్ వార్తలు Android, iOS మరియు వెబ్లో యాప్గా అందుబాటులో ఉన్నాయి. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ను మరియు జనవరి 2006లో అధికారిక యాప్ను విడుదల చేసింది."
text = f"""Instruction: {instruction} \nInput: {input} \nResponse:"""
encodings = tokenizer(text, padding=True, return_tensors="pt")
encodings = encodings.to(device)
with torch.inference_mode():
outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=500)
output = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
```
### Sample Output:
1. గూగుల్ వార్తలు అనేది గూగుల్ ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ, వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది.
2. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ మరియు జనవరి 2006లో అధికారిక యాప్ ను విడుదల చేసింది.
# Developers:
The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
# Note:
The model is quite sensitive to parameters and inputs and is not yet ready for production. It remains in the experimental phase, and we recommend using it accordingly. |
fatgong/5FpfnWPoQfe785YWWaFF4BTwj3c1MyXhBGWCnDFzP6v12Ysh_vgg | fatgong | "2024-03-25T18:05:17Z" | 1,048 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:17:57Z" | Entry not found |
Severian/Jamba-UltraInteract-Instruct-1B | Severian | "2024-04-07T23:16:57Z" | 1,048 | 0 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"custom_code",
"dataset:openbmb/UltraInteract_pair",
"base_model:ai21labs/Jamba-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-07T14:29:39Z" | ---
license: apache-2.0
tags:
- jamba
datasets:
- openbmb/UltraInteract_pair
base_model: ai21labs/Jamba-v0.1
---
# This Jamba model has been pruned to just 1B parameters. It was then trained on the first 50k examples of the UltraInteract Pair dataset for instruction-based fine-tuning.
*Initial tests work but may be inconsistent. More info and examples will be posted later*
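A minimal loading sketch with transformers is shown below (untested here; `trust_remote_code=True`, the dtype, and the generation settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Severian/Jamba-UltraInteract-Instruct-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, trust_remote_code=True)

inputs = tokenizer("Explain why the sky is blue in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```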
## Training
- **50k Examples**
- **6 hours x A100** |
duyntnet/internlm2-chat-7b-imatrix-GGUF | duyntnet | "2024-05-24T18:57:38Z" | 1,048 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"internlm2-chat-7b",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-05-24T16:24:17Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- internlm2-chat-7b
---
Quantizations of https://huggingface.co/internlm/internlm2-chat-7b
# From original readme
### Import from Transformers
To load the InternLM2 7B Chat model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
``` |
Nitral-AI/Hathor_Stable-v0.2-L3-8B | Nitral-AI | "2024-06-30T19:58:52Z" | 1,048 | 37 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-09T18:53:59Z" | ---
license: other
language:
- en
---

# "Hathor-v0.2 is a model based on the LLaMA 3 architecture: Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance. Making it an ideal tool for a wide range of applications; such as creative writing, educational support and human/computer interaction."
# Recommended ST Presets: [Hathor Presets](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01/tree/main/Hathor%20Presets)
# Quants From Bartowski <3: https://huggingface.co/bartowski/Hathor-L3-8B-v.02-GGUF https://huggingface.co/bartowski/Hathor-L3-8B-v.02-exl2
---
# Notes: Hathor is trained for 3 epochs on private data, synthetic Opus instructions, a mix of light/classical novel data, and roleplaying chat pairs over Llama 3 8B Instruct. (expanded)
|
microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft | microsoft | "2022-12-10T10:09:19Z" | 1,047 | 4 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-06-16T05:23:35Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer v2 (large-sized model)
Swin Transformer v2 model pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.
Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
author = {Ze Liu and
Han Hu and
Yutong Lin and
Zhuliang Yao and
Zhenda Xie and
Yixuan Wei and
Jia Ning and
Yue Cao and
Zheng Zhang and
Li Dong and
Furu Wei and
Baining Guo},
title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
journal = {CoRR},
volume = {abs/2111.09883},
year = {2021},
url = {https://arxiv.org/abs/2111.09883},
eprinttype = {arXiv},
eprint = {2111.09883},
timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_3 | ShenaoZ | "2024-05-05T14:57:10Z" | 1,047 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-05T13:55:14Z" | ---
license: mit
base_model: ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_2
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0005_withdpo_4iters_bs256_555lr_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0005_withdpo_4iters_bs256_555lr_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_2](https://huggingface.co/ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
er1123090/T3Q_SOLAR_SLERP_v1.0 | er1123090 | "2024-05-23T14:35:55Z" | 1,047 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v7.0",
"base_model:hwkwon/S-SOLAR-10.7B-v1.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-23T14:09:16Z" | ---
base_model:
- chihoonlee10/T3Q-ko-solar-dpo-v7.0
- hwkwon/S-SOLAR-10.7B-v1.5
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
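For intuition, spherical linear interpolation between two weight tensors can be sketched roughly as below (an illustrative sketch only, not mergekit's actual implementation):

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at fraction t."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    u0, u1 = v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)
    omega = torch.arccos(torch.clamp(u0 @ u1, -1.0, 1.0))  # angle between the two weight directions
    if omega.abs() < 1e-6:                                  # nearly parallel: plain lerp is fine
        return (1 - t) * w0 + t * w1
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return out.reshape(w0.shape).to(w0.dtype)
```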
### Models Merged
The following models were included in the merge:
* [chihoonlee10/T3Q-ko-solar-dpo-v7.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v7.0)
* [hwkwon/S-SOLAR-10.7B-v1.5](https://huggingface.co/hwkwon/S-SOLAR-10.7B-v1.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: chihoonlee10/T3Q-ko-solar-dpo-v7.0
layer_range: [0, 48]
- model: hwkwon/S-SOLAR-10.7B-v1.5
layer_range: [0, 48]
# or, the equivalent models: syntax:
# models:
# - model: psmathur/orca_mini_v3_13b
# - model: garage-bAInd/Platypus2-13B
merge_method: slerp
base_model: chihoonlee10/T3Q-ko-solar-dpo-v7.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
mrm8488/spanish-gpt2 | mrm8488 | "2023-03-19T09:01:22Z" | 1,046 | 14 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"GPT-2",
"es",
"dataset:large_spanish_corpus",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: es
tags:
- GPT-2
datasets:
- large_spanish_corpus
widget:
- text: "Érase un vez un"
license: mit
---
# Spanish GPT-2 trained on [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus)
This is a Spanish GPT-2 model trained from scratch on the [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus) aka BETO's corpus with [Flax](https://github.com/google/flax)
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Dataset
The dataset is about 20 GB. 95% of the data was used for training and the remaining 5% for validation.
## Metrics (on evaluation dataset)
- Loss: 2.413
- Perplexity: 11.36
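A quick way to try the model is the transformers text-generation pipeline (a minimal sketch; the seed and sampling settings are arbitrary):

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="mrm8488/spanish-gpt2")
set_seed(42)
print(generator("Érase una vez un", max_length=50, num_return_sequences=1)[0]["generated_text"])
```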
## Team members
- Manuel Romero ([mrm8488](https://huggingface.co/mrm8488))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Daniel Vera ([daveni](https://huggingface.co/daveni))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet))
- José Posada ([jdposa](https://huggingface.co/jdposa))
- Santiago Hincapie ([shpotes](https://huggingface.co/shpotes))
- Jorge ([jorgealro](https://huggingface.co/jorgealro))
## Useful links
- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Community Week thread](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-spanish/7086/8) |
j5ng/et5-typos-corrector | j5ng | "2023-06-05T07:41:33Z" | 1,046 | 4 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-06-04T07:20:53Z" | ---
language:
- ko
pipeline_tag: text2text-generation
license: apache-2.0
---
## 한국어 맞춤법 교정기(Korean Typos Corrector)
- A spelling/typo corrector specialized for colloquial Korean, fine-tuned from the ETRI ET5 model.
## Based on PLM model (ET5)
- ETRI (https://aiopen.etri.re.kr/et5Model)
## Based on Dataset
- Spelling-correction data from the Modu Corpus (모두의 말뭉치, https://corpus.korean.go.kr/request/reausetMain.do?lang=ko)
## Data Preprocessing
- 1. Remove special characters: commas (,) and periods (.)
- 2. Remove null values ("")
- 3. Remove sentences that are too short (length 2 or less)
- 4. Remove words containing name tags such as &name& or name1 (only the word is removed; the sentence is kept)
- Total: 318,882 pairs
***
## How to use
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
# Load the T5 model and tokenizer
model = T5ForConditionalGeneration.from_pretrained("j5ng/et5-typos-corrector")
tokenizer = T5Tokenizer.from_pretrained("j5ng/et5-typos-corrector")
device = "cuda:0" if torch.cuda.is_available() else "cpu"
# device = "mps:0" if torch.backends.mps.is_available() else "cpu"  # for Mac M1
model = model.to(device)
# Example input sentence
input_text = "아늬 진짜 무ㅓ하냐고"
# Encode the input sentence
input_encoding = tokenizer("맞춤법을 고쳐주세요: " + input_text, return_tensors="pt")
input_ids = input_encoding.input_ids.to(device)
attention_mask = input_encoding.attention_mask.to(device)
# Generate output with the T5 model
output_encoding = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
max_length=128,
num_beams=5,
early_stopping=True,
)
# Decode the output sentence
output_text = tokenizer.decode(output_encoding[0], skip_special_tokens=True)
# Print the result
print(output_text) # 아니 진짜 뭐 하냐고.
```
***
## With Transformer Pipeline
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, pipeline
model = T5ForConditionalGeneration.from_pretrained('j5ng/et5-typos-corrector')
tokenizer = T5Tokenizer.from_pretrained('j5ng/et5-typos-corrector')
typos_corrector = pipeline(
"text2text-generation",
model=model,
tokenizer=tokenizer,
device=0 if torch.cuda.is_available() else -1,
framework="pt",
)
input_text = "완죤 어이업ㅅ네진쨬ㅋㅋㅋ"
output_text = typos_corrector("맞춤법을 고쳐주세요: " + input_text,
max_length=128,
num_beams=5,
early_stopping=True)[0]['generated_text']
print(output_text) # 완전 어이없네 진짜 ᄏᄏᄏᄏ.
``` |
pabloce/Dolphin-2.8-slerp | pabloce | "2024-03-13T02:01:46Z" | 1,046 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"text-generation-inference",
"slerp",
"mergekit",
"merge",
"en",
"base_model:yam-peleg/Experiment26-7B",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-03-05T06:46:17Z" | ---
tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- text-generation-inference
- transformers
- slerp
- mistral
- mergekit
- merge
base_model:
- yam-peleg/Experiment26-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
thumbnail: "https://huggingface.co/pabloce/Dolphin-2.8-slerp/resolve/main/Dolphin-28-slerp.jpeg"
---
# Dolphin-2.8-slerp - merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
teddylee777/EEVE-Korean-Instruct-10.8B-v1.0-gguf | teddylee777 | "2024-04-25T16:57:46Z" | 1,046 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2402.14714",
"arxiv:2310.01377",
"arxiv:2306.02707",
"base_model:yanolja/EEVE-Korean-10.8B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-25T14:51:55Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-10.8B-v1.0
model-index:
- name: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
results: []
---
- Original model is [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0)
- quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)
## Ollama
Modelfile
```
FROM EEVE-Korean-Instruct-10.8B-v1.0-Q8_0.gguf
TEMPLATE """{{- if .System }}
<s>{{ .System }}</s>
{{- end }}
<s>Human:
{{ .Prompt }}</s>
<s>Assistant:
"""
SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."""
PARAMETER temperature 0
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop <s>
PARAMETER stop </s>
```
### Training Data
- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- No other dataset was used
## Citation
```
@misc{kim2024efficient,
title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
year={2024},
eprint={2402.14714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{SlimOrcaDedup,
title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/}
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
LyliaEngine/Pony_Diffusion_V6_XL | LyliaEngine | "2024-05-25T09:45:40Z" | 1,046 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:Bakanayatsu/Pony-Diffusion-V6-XL-for-Anime",
"license:cdla-permissive-2.0",
"region:us"
] | text-to-image | "2024-05-25T09:04:55Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up,
source_furry, beautiful female anthro shark portrait, dramatic lighting,
dark background
output:
url: images/00056-4212032008 (1).jpeg
base_model: Bakanayatsu/Pony-Diffusion-V6-XL-for-Anime
instance_prompt: None
license: cdla-permissive-2.0
---
# Pony_Diffusion_V6_XL
<Gallery />
## Model description


Pony Diffusion V6 is a versatile SDXL finetune capable of producing stunning SFW and NSFW visuals of various anthro, feral, or humanoids species and their interactions based on simple natural language prompts.
CHECK "ABOUT THIS VERSION" ON THE RIGHT IF YOU ARE NOT ON "V6" FOR IMPORTANT INFORMATION.
Please join our Discord Server to support development of new versions of this model and get access to free SD bot and check out more examples of this model capabilities on our prompt sharing website or follow the author on Twitter.
Important information
Make sure you load this model with clip skip 2 (or -2 in some software), otherwise you will be getting low quality blobs.
This model supports a wide array of styles and aesthetics but provides an opinionated default prompt template that allows generation of high quality samples with no negative prompt and otherwise default settings:
score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, just describe what you want, tag1, tag2
(previous Pony Diffusion models used a simpler score_9 quality modifier; the longer string in the V6 XL version is due to a training issue that was too late to correct during training. You can still use score_9, but it has a much weaker effect compared to the full string. You can learn more about these tags here.)
The model is designed to not need negative prompts in most cases and does not need other quality modifiers like "hd", "masterpiece", etc...
Other special data selection tags include, 'source_pony', 'source_furry', 'source_cartoon' and 'source_anime' and ratings of 'rating_safe', 'rating_questionable' and 'rating_explicit'.
This model is capable of recognizing many popular and obscure characters and series.
If you are looking specifically for pony style, I recommend using one of the two following templates `anthro/feral pony, rest of the prompt` or `source_pony, rest of the prompt`.
This model is trained on combination of natural language prompts and tags and is capable of understanding both, so describing intended result using normal language works in most cases, although you can add some tags after the main prompt to boost them.
Using Euler a with 25 steps and resolution of 1024px is recommended although model generally can do most supported SDXL resolution.
This model will sometimes generate pseudo signatures that are hard to remove even with negative prompts; this is unfortunately a training issue that will be corrected in future models. If that's an issue for you, I suggest trying V5.5 or inpainting.
Special thanks
Iceman for helping to procure necessary training resources
Haru for assistance with captioning efforts
Cookie for technical expertise in training
PSAI Server Subscribers for supporting the project costs
PSAI Server Moderators for being vigilant and managing the community
Technical details
The model has been trained on ~2.6M images aesthetically ranked based on authors personal preferences, with roughly 1:1 ratio between anime/cartoon/furry/pony datasets and 1:1 ratio between safe/questionable/explicit ratings. About 50% of all images has been captioned with high quality detailed captions, which results in very strong natural language capabilities.
All images has been trained with both captions (when available) and tags, artists' names have been removed and source data has been filtered based on our Opt-in/Opt-out program. Any explicit content involving underage characters has been filtered out.
License
This model is licensed under a modified Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/) license.
The following modifications have been added to Fair AI Public License:
You are not permitted to run inference of this model on websites or applications allowing any form of monetization (paid inference, faster tiers, etc.). This applies to any derivative models or model merges.
If you want to use this model commercially, please reach us at [email protected].
Explicit permission for commercial inference has been granted to CivitAi and Hugging Face.
## Source
https://civitai.com/models/257749
## Credit
https://civitai.com/user/PurpleSmartAI
## Trigger words
You should use `None` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LyliaEngine/Pony_Diffusion_V6_XL/tree/main) them in the Files & versions tab.
|
shubhamhuggingface99/fine-tuned-stable-diffusion | shubhamhuggingface99 | "2024-06-11T12:41:32Z" | 1,046 | 0 | diffusers | [
"diffusers",
"pytorch",
"safetensors",
"clip_text_model",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-11T12:33:29Z" | Entry not found |
Helsinki-NLP/opus-mt-es-fr | Helsinki-NLP | "2023-08-16T11:32:42Z" | 1,045 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-fr
* source languages: es
* target languages: fr
* OPUS readme: [es-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.eval.txt)
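A minimal usage sketch with the standard MarianMT classes in transformers (generation settings left at their defaults; the example sentence is arbitrary):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["La vida es bella."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```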
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.es.fr | 33.6 | 0.610 |
| news-test2008.es.fr | 32.0 | 0.585 |
| newstest2009.es.fr | 32.5 | 0.590 |
| newstest2010.es.fr | 35.0 | 0.615 |
| newstest2011.es.fr | 33.9 | 0.607 |
| newstest2012.es.fr | 32.4 | 0.602 |
| newstest2013.es.fr | 32.1 | 0.593 |
| Tatoeba.es.fr | 58.4 | 0.731 |
|
nlp-waseda/comet-t5-base-japanese | nlp-waseda | "2023-02-08T09:26:55Z" | 1,045 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ja",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-11-12T15:07:40Z" | ---
language: ja
widget:
- text: "次の出来事の後に起こりうることは何ですか: Xがパンを焼く"
---
# COMET-T5 ja
Finetuned T5 on [ATOMIC ja](https://github.com/nlp-waseda/comet-atomic-ja) using a text-to-text language modeling objective.
It was introduced in [this paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf).
### How to use
You can use this model directly with a pipeline for text2text generation.
Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text2text-generation', model='nlp-waseda/comet-t5-base-japanese')
>>> set_seed(42)
>>> generator("次の出来事の後に起こりうることは何ですか: Xが友人に電話する", max_length=30, num_return_sequences=5, do_sample=True)
[{'generated_text': 'Xが友人から返事を得る'},
{'generated_text': 'Xが会話する'},
{'generated_text': 'Xが友人に怒られる'},
{'generated_text': 'Xが退屈しそうな雰囲気になる'},
{'generated_text': 'Xが友人と会う'}]
```
### Preprocessing
The prompts are different for each relation:
| Relation | Prompt |
| :------: | :---------------------------------------: |
| xNeed | 次の出来事に必要な前提条件は何ですか: |
| xEffect | 次の出来事の後に起こりうることは何ですか: |
| xIntent | 次の出来事が起こった動機は何ですか: |
| xReact | 次の出来事の後に感じることは何ですか: |
## Evaluation results
The model achieves the following results:
| BLEU | BERTScore |
|:-----:|:---------:|
| 39.85 | 82.37 |
### BibTeX entry and citation info
```bibtex
@InProceedings{ide_nlp2023_event,
author = "井手竜也 and 村田栄樹 and 堀尾海斗 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
title = "人間と言語モデルに対するプロンプトを用いたゼロからのイベント常識知識グラフ構築",
booktitle = "言語処理学会第29回年次大会",
year = "2023",
url = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf"
}
```
|
maddes8cht/OpenAssistant-falcon-40b-sft-top1-560-gguf | maddes8cht | "2023-11-22T20:26:24Z" | 1,045 | 2 | null | [
"gguf",
"sft",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | "2023-09-23T08:19:37Z" | ---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
inference: false
datasets:
- OpenAssistant/oasst1
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# falcon-40b-sft-top1-560 - GGUF
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
Finally got the OpenAssistant falcon *sft* models working again
* [falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696)
* [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
* [falcon-40b-sft-mix-1226](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - asking the model the same question twice may well produce bigger differences than the quantization does.
---
# Original Model Card:
# Open-Assistant Falcon 40B SFT OASST-TOP1 Model
This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM.
It was trained with top-1 (high-quality) demonstrations of the OASST data set (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs with LIMA style dropout (p=0.3) and a context-length of 2048 tokens.
## Model Details
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-03_OpenAssistant_falcon-40b-sft-top1-560_sampling_noprefix2.json)
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/)
- **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/3lr77x4h) (Checkpoint: 560 steps)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
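A rough transformers sketch of this flow (the prompt string follows the template above; the loading options and sampling settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/falcon-40b-sft-top1-560"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```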
## Configuration Details
Model:
```
falcon-40b:
dtype: bf16
log_dir: "falcon_log_40b"
learning_rate: 5e-6
model_name: "tiiuae/falcon-40b"
deepspeed_config: configs/zero3_config_falcon.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 1
per_device_train_batch_size: 18
per_device_eval_batch_size: 10
eval_steps: 80
save_steps: 80
num_train_epochs: 8
save_total_limit: 4
use_flash_attention: false
residual_dropout: 0.3
residual_dropout_lima: true
sort_by_length: false
save_strategy: steps
```
Dataset:
```
oasst-top1:
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-05-06_OASST_labels.jsonl.gz
val_split: 0.05
top_k: 1
```
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
mlabonne/NeuralHermes-2.5-Mistral-7B | mlabonne | "2024-04-08T19:16:39Z" | 1,045 | 148 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-29T12:23:31Z" | ---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
datasets:
- mlabonne/chatml_dpo_pairs
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: NeuralHermes-2.5-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
---
<center><img src="https://i.imgur.com/qIhaFNM.png"></center>
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes is based on the [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on most benchmarks (see results).
It is directly inspired by the RLHF process described by [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)'s authors to improve performance. I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on [Google Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) and [GitHub](https://github.com/mlabonne/llm-course/tree/main). It required an A100 GPU for about an hour.
## Quantized models
* **GGUF**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF
* **AWQ**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-AWQ
* **GPTQ**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ
* **EXL2**:
* 3.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-3.0bpw-h6-exl2
* 4.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-4.0bpw-h6-exl2
* 5.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-5.0bpw-h6-exl2
* 6.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-6.0bpw-h6-exl2
* 8.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-8.0bpw-h8-exl2
## Results
**Update:** NeuralHermes-2.5 became the best Hermes-based model on the Open LLM leaderboard and one of the very best 7b models. 🎉

Teknium (author of OpenHermes-2.5-Mistral-7B) benchmarked the model ([see his tweet](https://twitter.com/Teknium1/status/1729955709377503660)).
Results are improved on every benchmark: **AGIEval** (from 43.07% to 43.62%), **GPT4All** (from 73.12% to 73.25%), and **TruthfulQA**.
### AGIEval

### GPT4All

### TruthfulQA

You can check the Weights & Biases project [here](https://wandb.ai/mlabonne/DPO/runs/axe71gr0?nw=nwusermlabonne).
## Usage
You can run this model using [LM Studio](https://lmstudio.ai/) or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
new_model = "mlabonne/NeuralHermes-2.5-Mistral-7B"  # this repository
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=new_model,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
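For orientation, here is a condensed sketch of how the settings above map onto `peft` and `trl` objects. The authoritative training script is in the linked Colab/GitHub; this sketch omits the dataset preprocessing (the `dpo_dataset` placeholder must provide `prompt`/`chosen`/`rejected` columns) and may differ from the original code in details:
```python
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "teknium/OpenHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA configuration, mirroring the values listed above.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

# Training arguments, mirroring the values listed above.
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    output_dir="./neuralhermes-dpo",
)

# dpo_dataset: placeholder for the preference dataset with "prompt", "chosen" and "rejected"
# columns (e.g. built from mlabonne/chatml_dpo_pairs); preparation is omitted here.
trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, trl derives the reference model from the frozen base
    args=training_args,
    train_dataset=dpo_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```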
|
mssongit/llama3-8b-koalpaca | mssongit | "2024-05-22T02:38:04Z" | 1,045 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-22T01:03:27Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
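In the meantime, a minimal sketch with `transformers` (assuming the checkpoint ships the standard Llama-3 chat template; adjust the prompt and sampling settings to taste):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mssongit/llama3-8b-koalpaca"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```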
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Disty0/sotediffusion-wuerstchen3 | Disty0 | "2024-06-20T16:36:28Z" | 1,045 | 8 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:other",
"diffusers:StableCascadePriorPipeline",
"region:us"
] | text-to-image | "2024-06-10T13:46:40Z" | ---
pipeline_tag: text-to-image
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
tags:
- text-to-image
- diffusers
inference: false
decoder:
- Disty0/sotediffusion-wuerstchen3-decoder
---
# SoteDiffusion Wuerstchen3
Anime finetune of Würstchen V3.
# Release Notes
- This release is sponsored by <a href="https://fal.ai/grants?rel=sote-diffusion" target="_blank">fal.ai/grants</a>
- Trained on 6M images for 3 epochs using 8x A100 80G GPUs.
# API Usage
This model can be used via API with Fal.AI
For more details: https://fal.ai/models/fal-ai/stable-cascade/sote-diffusion
<style>
.image {
float: left;
margin-left: 10px;
}
</style>
<table>
<img class="image" src="https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/9NmbUy1iaenscVLqCt7dA.png" width="320">
<img class="image" src="https://cdn-uploads.huggingface.co/production/uploads/6456af6195082f722d178522/78vAZc1-Ed1LhBst7HAa5.png" width="320">
</table>
# UI Guide
## SD.Next
URL: https://github.com/vladmandic/automatic/
Go to Models -> Huggingface and type `Disty0/sotediffusion-wuerstchen3-decoder` into the model name and press download.
Load `Disty0/sotediffusion-wuerstchen3-decoder` after the download process is complete.
Prompt:
```
newest, extremely aesthetic, best quality,
```
Negative Prompt:
```
very displeasing, worst quality, monochrome, realistic, oldest, loli,
```
Parameters:
- Sampler: Default
- Steps: 30 or 40
- Refiner Steps: 10
- CFG: 7
- Secondary CFG: 2 or 1
- Resolution: 1024x1536, 2048x1152 (anything works as long as it's a multiple of 128)
## ComfyUI
Please refer to CivitAI: https://civitai.com/models/353284
# Code Example
```shell
pip install diffusers
```
```python
import torch
from diffusers import StableCascadeCombinedPipeline
device = "cuda"
dtype = torch.bfloat16 # or torch.float16
model = "Disty0/sotediffusion-wuerstchen3-decoder"
pipe = StableCascadeCombinedPipeline.from_pretrained(model, torch_dtype=dtype)
# send everything to the gpu:
pipe = pipe.to(device, dtype=dtype)
pipe.prior_pipe = pipe.prior_pipe.to(device, dtype=dtype)
# or enable model offload to save vram:
# pipe.enable_model_cpu_offload()
prompt = "newest, extremely aesthetic, best quality, 1girl, solo, cat ears, pink hair, orange eyes, long hair, bare shoulders, looking at viewer, smile, indoors, casual, living room, playing guitar,"
negative_prompt = "very displeasing, worst quality, monochrome, realistic, oldest, loli,"
output = pipe(
width=1024,
height=1536,
prompt=prompt,
negative_prompt=negative_prompt,
decoder_guidance_scale=2.0,
prior_guidance_scale=7.0,
prior_num_inference_steps=30,
output_type="pil",
num_inference_steps=10
).images[0]
## do something with the output image
```
## Training:
**Software used**: Kohya SD-Scripts with Stable Cascade branch.
https://github.com/kohya-ss/sd-scripts/tree/stable-cascade
**GPU used**: 8x Nvidia A100 80GB
**GPU Hours**: 220
### Base
| parameter | value |
|---|---|
| **amp** | bf16 |
| **weights** | fp32 |
| **save weights** | fp16 |
| **resolution** | 1024x1024 |
| **effective batch size** | 128 |
| **unet learning rate** | 1e-5 |
| **te learning rate** | 4e-6 |
| **optimizer** | Adafactor |
| **images** | 6M |
| **epochs** | 3 |
### Final
| parameter | value |
|---|---|
| **amp** | bf16 |
| **weights** | fp32 |
| **save weights** | fp16 |
| **resolution** | 1024x1024 |
| **effective batch size** | 128 |
| **unet learning rate** | 4e-6 |
| **te learning rate** | none |
| **optimizer** | Adafactor |
| **images** | 120K |
| **epochs** | 16 |
## Dataset:
**GPU used for captioning**: 1x Intel ARC A770 16GB
**GPU Hours**: 350
**Model used for captioning**: SmilingWolf/wd-swinv2-tagger-v3
**Model used for text**: llava-hf/llava-1.5-7b-hf
**Command:**
```
python /mnt/DataSSD/AI/Apps/kohya_ss/sd-scripts/finetune/tag_images_by_wd14_tagger.py --model_dir "/mnt/DataSSD/AI/models/wd14_tagger_model" --repo_id "SmilingWolf/wd-swinv2-tagger-v3" --recursive --remove_underscore --use_rating_tags --character_tags_first --character_tag_expand --append_tags --onnx --caption_separator ", " --general_threshold 0.35 --character_threshold 0.50 --batch_size 4 --caption_extension ".txt" ./
```
| dataset name | total images |
|---|---|
| **newest** | 1.848.331 |
| **recent** | 1.380.630 |
| **mid** | 993.227 |
| **early** | 566.152 |
| **oldest** | 160.397 |
| **pixiv** | 343.614 |
| **visual novel cg** | 231.358 |
| **anime wallpaper** | 104.790 |
| **Total** | 5.628.499 |
**Note**:
- Smallest size is 1280x600 | 768.000 pixels
- Deduped based on image similarity using czkawka-cli
- Around 120K very high quality images got intentionally duplicated 5 times, making the total image count 6.2M
## Tags:
Model is trained with random tag order but this is the order in the dataset if you are interested:
```
aesthetic tags, quality tags, date tags, custom tags, rating tags, character, series, rest of the tags
```
### Date:
| tag | date |
|---|---|
| **newest** | 2022 to 2024 |
| **recent** | 2019 to 2021 |
| **mid** | 2015 to 2018 |
| **early** | 2011 to 2014 |
| **oldest** | 2005 to 2010 |
### Aesthetic Tags:
**Model used**: shadowlilac/aesthetic-shadow-v2
| score greater than | tag | count |
|---|---|---|
| **0.90** | extremely aesthetic | 125.451 |
| **0.80** | very aesthetic | 887.382 |
| **0.70** | aesthetic | 1.049.857 |
| **0.50** | slightly aesthetic | 1.643.091 |
| **0.40** | not displeasing | 569.543 |
| **0.30** | not aesthetic | 445.188 |
| **0.20** | slightly displeasing | 341.424 |
| **0.10** | displeasing | 237.660 |
| **rest of them** | very displeasing | 328.712 |
### Quality Tags:
**Model used**: https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/models/aes-B32-v0.pth
| score greater than | tag | count |
|---|---|---|
| **0.980** | best quality | 1.270.447 |
| **0.900** | high quality | 498.244 |
| **0.750** | great quality | 351.006 |
| **0.500** | medium quality | 366.448 |
| **0.250** | normal quality | 368.380 |
| **0.125** | bad quality | 279.050 |
| **0.025** | low quality | 538.958 |
| **rest of them** | worst quality | 1.955.966 |
## Rating Tags:
| tag | count |
|---|---|
| **general** | 1.416.451 |
| **sensitive** | 3.447.664 |
| **nsfw** | 427.459 |
| **explicit nsfw** | 336.925 |
## Custom Tags:
| dataset name | custom tag |
|---|---|
| **image boards** | date, |
| **text** | The text says "text", |
| **characters** | character, series
| **pixiv** | art by Display_Name, |
| **visual novel cg** | Full_VN_Name (short_3_letter_name), visual novel cg, |
| **anime wallpaper** | date, anime wallpaper, |
## Limitations and Bias
### Bias
- This model is intended for anime illustrations.
Realistic capabilities are not tested at all.
### Limitations
- Can fall back to realistic.
Add "realistic" tag to the negatives when this happens.
- Far shot eyes and hands can be bad.
## License
SoteDiffusion models fall under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with Stable Diffusion models’ license. Key points:
1. **Modification Sharing:** If you modify SoteDiffusion models, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
**Notes**: Anything not covered by Fair AI license is inherited from Stability AI Non-Commercial license which is named as LICENSE_INHERIT.
|
fatgong/5DwB5SjAoqygNHRgJsJ1NdCnUS1unFaQf2dbMy6k1VKuyhU7_vgg | fatgong | "2024-03-21T11:44:08Z" | 1,044 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:13:58Z" | Entry not found |
digiplay/Cetus-Mix-Codaedition_diffusers | digiplay | "2024-06-15T18:53:25Z" | 1,044 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-04T22:07:58Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/6755/cetus-mix
The 5GB Cetus-Mix Coda Edition checkpoint, converted into a 2GB diffusers version.
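A minimal diffusers loading sketch (fp16 on CUDA is assumed; the prompt is only an illustration):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/Cetus-Mix-Codaedition_diffusers",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, masterpiece, best quality, looking at viewer").images[0]
image.save("cetus_mix_sample.png")
```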
RichardErkhov/openai-community_-_gpt2-xl-gguf | RichardErkhov | "2024-05-02T02:03:49Z" | 1,044 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"region:us"
] | null | "2024-04-17T09:16:12Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-xl - GGUF
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2-xl/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-xl.Q2_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q2_K.gguf) | Q2_K | 0.84GB |
| [gpt2-xl.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ3_XS.gguf) | IQ3_XS | 0.84GB |
| [gpt2-xl.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ3_S.gguf) | IQ3_S | 0.84GB |
| [gpt2-xl.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K_S.gguf) | Q3_K_S | 0.84GB |
| [gpt2-xl.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ3_M.gguf) | IQ3_M | 0.91GB |
| [gpt2-xl.Q3_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K.gguf) | Q3_K | 0.96GB |
| [gpt2-xl.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K_M.gguf) | Q3_K_M | 0.96GB |
| [gpt2-xl.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q3_K_L.gguf) | Q3_K_L | 1.02GB |
| [gpt2-xl.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ4_XS.gguf) | IQ4_XS | 0.9GB |
| [gpt2-xl.Q4_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_0.gguf) | Q4_0 | 0.9GB |
| [gpt2-xl.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.IQ4_NL.gguf) | IQ4_NL | 0.91GB |
| [gpt2-xl.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_K_S.gguf) | Q4_K_S | 1.03GB |
| [gpt2-xl.Q4_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_K.gguf) | Q4_K | 1.11GB |
| [gpt2-xl.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_K_M.gguf) | Q4_K_M | 1.11GB |
| [gpt2-xl.Q4_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q4_1.gguf) | Q4_1 | 0.99GB |
| [gpt2-xl.Q5_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_0.gguf) | Q5_0 | 1.08GB |
| [gpt2-xl.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_K_S.gguf) | Q5_K_S | 1.15GB |
| [gpt2-xl.Q5_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_K.gguf) | Q5_K | 1.28GB |
| [gpt2-xl.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_K_M.gguf) | Q5_K_M | 1.28GB |
| [gpt2-xl.Q5_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q5_1.gguf) | Q5_1 | 1.17GB |
| [gpt2-xl.Q6_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-xl-gguf/blob/main/gpt2-xl.Q6_K.gguf) | Q6_K | 1.52GB |
Original model description:
---
language: en
license: mit
---
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
|
mradermacher/GreenScorpius-xb-Passthrough-GGUF | mradermacher | "2024-06-11T02:49:19Z" | 1,044 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"viethq188/LeoScorpius-7B-Chat-DPO",
"GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"en",
"base_model:powermove72/GreenScorpius-xb-Passthrough",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T01:10:40Z" | ---
base_model: powermove72/GreenScorpius-xb-Passthrough
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- viethq188/LeoScorpius-7B-Chat-DPO
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/GreenScorpius-xb-Passthrough
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GreenScorpius-xb-Passthrough-GGUF/resolve/main/GreenScorpius-xb-Passthrough.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SiberiaSoft/SiberianPersonaFred-2 | SiberiaSoft | "2023-09-19T07:33:17Z" | 1,043 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ru",
"dataset:SiberiaSoft/SiberianPersonaChat-2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-09-17T04:14:17Z" | ---
license: mit
datasets:
- SiberiaSoft/SiberianPersonaChat-2
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC6>Я парень, консультант по разным вопросам. Я очень умный. Я люблю помогать собеседнику. Недавно, у меня был следующий диалог:\nТы: Почему трава зеленая?\nЯ: <extra_id_0>'
- text: '<SC6>Я очень умная девушка, и хочу помочь своему другу полезными советами. Недавно, у меня был следующий диалог:\nТы: Ты знаешь, я недавно посетил природный парк, и это было просто невероятно!\nЯ: Настоящая красота природных парков и заповедников никогда не перестанет меня поражать.\nТы: Согласен, я был ошеломлен разнообразием животных и растительности.\nЯ: <extra_id_0>'
- text: '<SC6>Вопрос: Как вывести воду из организма для похудения быстро?\nОтвет: <extra_id_0>'
---
### SiberiaSoft/SiberianPersonaFred
This model is designed to imitate a persona in dialogue. More details [here](https://huggingface.co/datasets/SiberiaSoft/SiberianPersonaChat-2).
The model is based on [FRED-T5-XL](https://huggingface.co/ai-forever/FRED-T5-1.7B)
## Persona description format
1. Я очень умная девушка, и хочу помочь своему другу полезными советами.
2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику.
Facts about the persona (full name, age, etc.) can also be inserted into the prompt:
1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи.
Article on Habr: [link](https://habr.com/ru/articles/751580/)
### Inference code example
```python
import torch
import transformers
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("SiberiaSoft/SiberianPersonaFred-2")
t5_model = transformers.T5ForConditionalGeneration.from_pretrained("SiberiaSoft/SiberianPersonaFred-2")
t5_model.to(device)  # move the model to the GPU if available; the inputs below are sent to the same device
t5_model.eval()
while True:
    print('-'*80)
    dialog = []
    while True:
        msg = input('H:> ').strip()
        if len(msg) == 0:
            break
        msg = msg[0].upper() + msg[1:]
        dialog.append('Ты: ' + msg)
        # The persona description prompt goes at the beginning.
        prompt = '<SC6>Я парень, консультант по разным вопросам. Я очень умный. Я люблю помогать собеседнику. Недавно, у меня был следующий диалог:' + '\n'.join(dialog) + '\nЯ: <extra_id_0>'
        input_ids = t5_tokenizer(prompt, return_tensors='pt').input_ids
        out_ids = t5_model.generate(input_ids=input_ids.to(device), do_sample=True, temperature=0.9, max_new_tokens=512, top_p=0.85,
                                    top_k=2, repetition_penalty=1.2)
        t5_output = t5_tokenizer.decode(out_ids[0][1:])
        if '</s>' in t5_output:
            t5_output = t5_output[:t5_output.find('</s>')].strip()
        t5_output = t5_output.replace('<extra_id_0>', '').strip()
        t5_output = t5_output.split('Собеседник')[0].strip()
        print('B:> {}'.format(t5_output))
        dialog.append('Я: ' + t5_output)
``` |
mradermacher/MiquTotalMix-70B-GGUF | mradermacher | "2024-05-06T06:04:28Z" | 1,043 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Undi95/MiquTotalMix-70B",
"endpoints_compatible",
"region:us"
] | null | "2024-03-22T08:50:10Z" | ---
base_model: Undi95/MiquTotalMix-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/Undi95/MiquTotalMix-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
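For the split Q6_K and Q8_0 files in the table below, the parts only need to be concatenated byte-for-byte into a single `.gguf` file. A small Python sketch for the Q6_K parts (filenames follow the table; `cat` on Linux/macOS does the same job):
```python
import shutil
from pathlib import Path

# Collect the downloaded parts in order (part1of2, part2of2, ...).
parts = sorted(Path(".").glob("MiquTotalMix-70B.Q6_K.gguf.part*of*"))

with open("MiquTotalMix-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part; avoids loading tens of GB into RAM
```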
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q2_K.gguf) | Q2_K | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.IQ3_XS.gguf) | IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q3_K_S.gguf) | Q3_K_S | 30.3 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.IQ3_M.gguf) | IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q3_K_L.gguf) | Q3_K_L | 36.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.IQ4_XS.gguf) | IQ4_XS | 37.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q5_K_S.gguf) | Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q5_K_M.gguf) | Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF/resolve/main/MiquTotalMix-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
turboderp/Llama-3-8B-Instruct-exl2 | turboderp | "2024-04-18T17:50:48Z" | 1,043 | 30 | null | [
"region:us"
] | null | "2024-04-18T17:46:22Z" | EXL2 quants of [Llama-3 8B instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct/tree/main)
[2.50 bits per weight](https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2/tree/2.5bpw)
[3.00 bits per weight](https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2/tree/3.0bpw)
[4.00 bits per weight](https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2/tree/4.0bpw)
[5.00 bits per weight](https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2/tree/5.0bpw)
[6.00 bits per weight](https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2/tree/6.0bpw)
[measurement.json](https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2/blob/main/measurement.json) |
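Each quantisation lives on its own branch of this repository; a minimal sketch for fetching one of them with `huggingface_hub` (the branch name is the bits-per-weight label):
```python
from huggingface_hub import snapshot_download

# Download the 4.0 bpw quantisation into a local directory.
local_dir = snapshot_download(
    repo_id="turboderp/Llama-3-8B-Instruct-exl2",
    revision="4.0bpw",  # or "2.5bpw", "3.0bpw", "5.0bpw", "6.0bpw"
    local_dir="Llama-3-8B-Instruct-exl2-4.0bpw",
)
print(local_dir)
```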
mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF | mradermacher | "2024-06-13T01:25:45Z" | 1,043 | 1 | transformers | [
"transformers",
"gguf",
"llama-3",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-llama3-70b-v21.2-32k",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T13:41:20Z" | ---
base_model: OpenBuddy/openbuddy-llama3-70b-v21.2-32k
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
library_name: transformers
license: other
license_link: https://llama.meta.com/llama3/license/
license_name: llama3
quantized_by: mradermacher
tags:
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-llama3-70b-v21.2-32k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/openbuddy-llama3-70b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-llama3-70b-v21.2-32k.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
ishan-07/bottom-layers-finetuned-eurosat | ishan-07 | "2023-09-02T14:34:24Z" | 1,042 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-09-02T13:59:09Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bottom-layers-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bottom-layers-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1897
- Accuracy: 0.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4395 | 1.0 | 168 | 0.3521 | 0.9648 |
| 0.3093 | 2.0 | 337 | 0.2165 | 0.9765 |
| 0.2801 | 2.99 | 504 | 0.1897 | 0.9796 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
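The card is auto-generated and does not include an inference snippet; a minimal sketch, assuming the fine-tuned checkpoint is available on the Hub under this repository id (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ishan-07/bottom-layers-finetuned-eurosat",
)

# Placeholder path: any RGB satellite tile similar to the EuroSAT images.
print(classifier("example_eurosat_tile.png"))
```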
|
asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GGUF | asyafiqe | "2023-10-19T10:18:27Z" | 1,042 | 7 | null | [
"gguf",
"en",
"id",
"dataset:asyafiqe/orca_mini_v1_indonesia",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2023-09-09T03:21:35Z" | ---
inference: false
license: cc-by-nc-sa-4.0
datasets:
- asyafiqe/orca_mini_v1_indonesia
language:
- en
- id
---
# 🦚Merak-7B-v3-Mini-Orca-GGUF🐳
<p align="center">
<img src="https://i.imgur.com/39sQd3h.png" alt="Merak Orca" width="300" height="300"/>
</p>
## Description
These files are GGUF format model files for [Merak-7B-v3-Mini-Orca](https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
Here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
## Prompt template:
You can use [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format for Oobabooga's text generation webui.
```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
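For context, files like the ones listed below are typically produced with llama.cpp's conversion and quantization tools. The sketch below assumes a local copy of the original model and a recent llama.cpp build; paths and the exact binary name may differ by version:

```bash
# Convert the original HF checkpoint to an fp16 GGUF, then quantize it (illustrative paths)
python convert.py ./Merak-7B-v3-Mini-Orca-Indo --outtype f16 --outfile merak-7b-f16.gguf
./quantize merak-7b-f16.gguf Merak-7B-v3-Mini-Orca-Indo.q4_K_M.gguf Q4_K_M
```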
## Provided files
| Name | Quant method | Bits | Use case |
| ---- | ---- | ---- | ---- |
| Merak-7B-v3-Mini-Orca-Indo.gguf | Q2_K | 2 | smallest, significant quality loss - not recommended for most purposes |
| Merak-7B-v3-Mini-Orca-Indo.q3_K_S.gguf | Q3_K_S | 3 | very small, high quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q3_K_M.gguf | Q3_K_M | 3 | very small, high quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q3_K_L.gguf | Q3_K_L | 3 | small, substantial quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q4_0.gguf| Q4_0 | 4 | legacy; small, very high quality loss - prefer using Q3_K_M |
| Merak-7B-v3-Mini-Orca-Indo.q4_K_S.gguf| Q4_K_S | 4 | small, greater quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q4_K_M.gguf | Q4_K_M | 4 | medium, balanced quality - recommended |
| Merak-7B-v3-Mini-Orca-Indo.q5_0.gguf | Q5_0 | 5 | legacy; medium, balanced quality - prefer using Q4_K_M |
| Merak-7B-v3-Mini-Orca-Indo.q5_K_S.gguf | Q5_K_S | 5 | large, low quality loss - recommended |
| Merak-7B-v3-Mini-Orca-Indo.q5_K_M.gguf | Q5_K_M | 5 | large, very low quality loss - recommended |
| Merak-7B-v3-Mini-Orca-Indo.q6_K.gguf| Q6_K | 6 | very large, extremely low quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q8_0.gguf| Q8_0 | 8 | very large, extremely low quality loss - not recommended |
**Note**: RAM requirements scale with the file sizes above and assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
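As a minimal alternative to the web UI, the GGUF files can also be loaded from Python with `llama-cpp-python`. This is a sketch: the file name is the Q4_K_M quant from the table above, assumed to be downloaded locally, and the example question is illustrative.

```python
from llama_cpp import Llama

# Load a local copy of the quantized model
llm = Llama(model_path="./Merak-7B-v3-Mini-Orca-Indo.q4_K_M.gguf", n_ctx=4096)

# Vicuna 1.1-style prompt with the Indonesian system message
prompt = (
    "SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. "
    "Anda harus memberikan jawaban yang rinci dan panjang.\n"
    "USER: Apa manfaat olahraga bagi kesehatan?\n"
    "ASSISTANT:"
)

output = llm(prompt, max_tokens=256, temperature=0.1, stop=["USER:"])
print(output["choices"][0]["text"])
```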
# Original model card: 🦚Merak-7B-v3-Mini-Orca🐳
**Merak-7B-v3-Mini-Orca** is Ichsan2895's [Merak-7B-v3](https://huggingface.co/Ichsan2895/Merak-7B-v3) fine-tuned
on a Bahasa Indonesia translation of psmathur's [orca_mini_v1_dataset](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset).
**Quantized** versions are available:
GPTQ: https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GPTQ
GGML/GGUF: I will try to make this version once GGUF merge is stable.
Start chatting with Merak Mini Orca using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (fp16, automatically placed on available GPUs)
tokenizer = AutoTokenizer.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo")
model = AutoModelForCausalLM.from_pretrained(
    "asyafiqe/Merak-7B-v3-Mini-Orca-Indo", torch_dtype=torch.float16, device_map="auto"
)

# Build a Vicuna-style prompt with the Indonesian system message
system_prompt = "SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus menghasilkan jawaban yang rinci dan panjang.\n"
message = "Buatlah rencana untuk mengurangi penggunaan listrik di rumah."
prompt = f"{system_prompt}USER: {message}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.1, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Prompt format
You can use [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format for Oobabooga's text generation webui.
```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
## Training details
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Merak-7B-v3-Mini-Orca was instruction fine-tuned on 2 x 3090-24GB for 6 hours. [LoRA](https://github.com/microsoft/LoRA), [DeepSpeed ZeRO-2](https://github.com/microsoft/DeepSpeed), and [FlashAttention](https://github.com/Dao-AILab/flash-attention) were implemented during training using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
| Hyperparameter | Value |
| ------ | ------ |
| learning rate | 0.0004 |
| batch size | 16 |
| microbatch size | 2 |
| warmup step | 100 |
| epochs | 2 |
| weight decay | 0.0 |
| lr scheduler | cosine |
| lora alpha | 16 |
| lora rank | 16 |
| lora dropout | 0.05 |
| lora target modules | q_proj, v_proj, k_proj, o_proj |
| cutoff length | 4096 |
#### Training loss
| Step | Train Loss |
| ------ | ------ |
| 1 | 0.9578 |
| 100 | 0.816 |
| 200 | 0.7819 |
| 300 | 0.7279 |
| 400 | 0.732 |
| 500 | 0.7139 |
| 600 | 0.6829 |
| 700 | 0.6641 |
| 800 | 0.6553 |
#### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## Citation
```
@Paper{arXiv,
author = {Touvron, et al},
title = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
journal = {arXiv preprint arXiv:2307.09288},
year = {2023}
}
@misc{orca_mini_v3_70b,
author = {Pankaj Mathur},
title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
readme adapted from [TheBloke](https://huggingface.co/TheBloke) |
fatgong/5DveE2ZNc3ei8yi4K57ANp7VZ7P4VZHJw7iQj19P4SdSRo2G_vgg | fatgong | "2024-03-27T22:48:17Z" | 1,042 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:16:14Z" | Entry not found |
Dilipan/detr-finetuned-invoice | Dilipan | "2024-03-19T12:08:00Z" | 1,042 | 1 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-03-19T11:11:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
audreyt/Taiwan-LLaMa-v1.0-GGUF | audreyt | "2023-10-10T15:06:49Z" | 1,041 | 13 | null | [
"gguf",
"text-generation",
"zh",
"dataset:yentinglin/zh_TW_c4",
"dataset:yentinglin/traditional_chinese_instructions",
"arxiv:2305.13711",
"arxiv:2104.09864",
"license:llama2",
"region:us"
] | text-generation | "2023-08-26T00:09:31Z" | ---
datasets:
- yentinglin/zh_TW_c4
- yentinglin/traditional_chinese_instructions
inference: false
license: llama2
language:
- zh
model_creator: Yen-Ting Lin
model_link: https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0
model_name: Language Models for Taiwanese Culture 1.0
model_type: llama
quantized_by: Audrey Tang
pipeline_tag: text-generation
---
<!-- header start -->
<!-- header end -->
# Taiwan-LLaMa-v1.0 - GGUF
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Language Models for Taiwanese Culture v1.0](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0)
## Description
This repo contains GGUF format model files for [Yen-Ting Lin's Language Models for Taiwanese Culture v1.0](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0).
## About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/audreyt/Taiwan-LLaMa-v1.0-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/audreyt/Taiwan-LLaMa-v1.0-GGML)
* [Yen-Ting Lin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0)
<!-- footer start -->
<!-- footer end -->
# Original model card: Yen-Ting Lin's Language Models for Taiwanese Culture v1.0
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://chat.twllm.com/" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** was pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance and better capture Taiwanese culture.
- [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's context length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from Common Crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin for evaluating general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa-v1.0" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
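Once the TGI server is running, it can be queried over HTTP. The snippet below is a sketch; the port is whatever `PORT` you passed to the launch script above, and the example question is illustrative.

```python
import requests

# Vicuna-v1.1-formatted prompt, as described above
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: 請介紹一下台灣的夜市文化 ASSISTANT:"
)

response = requests.post(
    "http://localhost:8080/generate",  # replace 8080 with your PORT
    json={"inputs": prompt, "parameters": {"max_new_tokens": 256, "temperature": 0.7}},
)
print(response.json()["generated_text"])
```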
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Language Models for Taiwanese Culture},
year={2023},
url={https://github.com/MiuLab/Taiwan-LLaMa},
note={Code and models available at https://github.com/MiuLab/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
|
CHE-72/TAIDE-LX-7B-Chat-Q4_K_S-GGUF | CHE-72 | "2024-06-22T17:30:04Z" | 1,041 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
] | null | "2024-06-22T17:29:42Z" | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q4_K_S-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_S-GGUF --hf-file taide-lx-7b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_S-GGUF --hf-file taide-lx-7b-chat-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_S-GGUF --hf-file taide-lx-7b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q4_K_S-GGUF --hf-file taide-lx-7b-chat-q4_k_s.gguf -c 2048
```
|
eliasalbouzidi/distilbert-nsfw-text-classifier | eliasalbouzidi | "2024-06-01T16:15:51Z" | 1,040 | 4 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"Transformers",
" PyTorch",
"safety",
"innapropriate",
"en",
"dataset:eliasalbouzidi/NSFW-Safe-Dataset",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-07T01:10:13Z" | ---
widget:
- text: A family hiking in the mountains
example_title: Safe
- text: A child playing with a puppy
example_title: Safe
- text: A couple kissing passionately in bed
example_title: Nsfw
- text: A woman naked
example_title: Nsfw
- text: A man killing people
example_title: Nsfw
- text: A mass shooting
example_title: Nsfw
base_model: distilbert-base-uncased
license: apache-2.0
language:
- en
metrics:
- f1
- accuracy
- precision
- recall
pipeline_tag: text-classification
tags:
- Transformers
- ' PyTorch'
- safety
- innapropriate
- distilbert
datasets:
- eliasalbouzidi/NSFW-Safe-Dataset
model-index:
- name: NSFW-Safe-Dataset
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: NSFW-Safe-Dataset
type: .
metrics:
- name: F1
type: f1
value: 0.974
- name: Accuracy
type: accuracy
value: 0.98
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
This model is designed to categorize text into two classes: "safe" or "nsfw" (not safe for work), which makes it suitable for content moderation and filtering applications.
The model was trained using a dataset containing 190,000 labeled text samples, distributed among the two classes of "safe" and "nsfw".
The model is based on the Distilbert-base model.
In terms of performance, the model achieves an F1 score of 0.974 (evaluated on 40K examples).
To improve the performance of the model, it is necessary to preprocess the input text. You can refer to the preprocess function in the app.py file in the following space: <https://huggingface.co/spaces/eliasalbouzidi/distilbert-nsfw-text-classifier>.
### Model Description
The model can be used directly to classify text into one of the two classes. It takes in a string of text as input and outputs a probability distribution over the two classes. The class with the highest probability is selected as the predicted class.
- **Developed by:** Centrale Supélec Students
- **Model type:** 60M
- **Language(s) (NLP):** English
- **License:** apache-2.0
### Uses
The model can be integrated into larger systems for content moderation or filtering.
### Training Data
The training data for finetuning the text classification model consists of a large corpus of text labeled with one of the two classes: "safe" and "nsfw". The dataset contains a total of 190,000 examples, which are distributed as follows:
- 117,000 examples labeled as "safe"
- 63,000 examples labeled as "nsfw"
It was assembled by scraping data from the web and utilizing existing open-source datasets. A significant portion of the dataset consists of descriptions for images and scenes. The primary objective was to prevent diffusers from generating NSFW content, but it can be used for other moderation purposes.
You can access the dataset : https://huggingface.co/datasets/eliasalbouzidi/NSFW-Safe-Dataset
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Fbeta 1.6 | False positive rate | False negative rate | Precision | Recall |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:---------:|:-------------------:|:-------------------:|:---------:|:------:|
| 0.3367 | 0.0998 | 586 | 0.1227 | 0.9586 | 0.9448 | 0.9447 | 0.0331 | 0.0554 | 0.9450 | 0.9446 |
| 0.0998 | 0.1997 | 1172 | 0.0919 | 0.9705 | 0.9606 | 0.9595 | 0.0221 | 0.0419 | 0.9631 | 0.9581 |
| 0.0896 | 0.2995 | 1758 | 0.0900 | 0.9730 | 0.9638 | 0.9600 | 0.0163 | 0.0448 | 0.9724 | 0.9552 |
| 0.087 | 0.3994 | 2344 | 0.0820 | 0.9743 | 0.9657 | 0.9646 | 0.0191 | 0.0367 | 0.9681 | 0.9633 |
| 0.0806 | 0.4992 | 2930 | 0.0717 | 0.9752 | 0.9672 | 0.9713 | 0.0256 | 0.0235 | 0.9582 | 0.9765 |
| 0.0741 | 0.5991 | 3516 | 0.0741 | 0.9753 | 0.9674 | 0.9712 | 0.0251 | 0.0240 | 0.9589 | 0.9760 |
| 0.0747 | 0.6989 | 4102 | 0.0689 | 0.9773 | 0.9697 | 0.9696 | 0.0181 | 0.0305 | 0.9699 | 0.9695 |
| 0.0707 | 0.7988 | 4688 | 0.0738 | 0.9781 | 0.9706 | 0.9678 | 0.0137 | 0.0356 | 0.9769 | 0.9644 |
| 0.0644 | 0.8986 | 5274 | 0.0682 | 0.9796 | 0.9728 | 0.9708 | 0.0135 | 0.0317 | 0.9773 | 0.9683 |
| 0.0688 | 0.9985 | 5860 | 0.0658 | 0.9798 | 0.9730 | 0.9718 | 0.0144 | 0.0298 | 0.9758 | 0.9702 |
| 0.0462 | 1.0983 | 6446 | 0.0682 | 0.9800 | 0.9733 | 0.9723 | 0.0146 | 0.0290 | 0.9756 | 0.9710 |
| 0.0498 | 1.1982 | 7032 | 0.0706 | 0.9800 | 0.9733 | 0.9717 | 0.0138 | 0.0303 | 0.9768 | 0.9697 |
| 0.0484 | 1.2980 | 7618 | 0.0773 | 0.9797 | 0.9728 | 0.9696 | 0.0117 | 0.0345 | 0.9802 | 0.9655 |
| 0.0483 | 1.3979 | 8204 | 0.0676 | 0.9800 | 0.9734 | 0.9742 | 0.0172 | 0.0248 | 0.9715 | 0.9752 |
| 0.0481 | 1.4977 | 8790 | 0.0678 | 0.9798 | 0.9731 | 0.9737 | 0.0170 | 0.0255 | 0.9717 | 0.9745 |
| 0.0474 | 1.5975 | 9376 | 0.0665 | 0.9782 | 0.9713 | 0.9755 | 0.0234 | 0.0191 | 0.9618 | 0.9809 |
| 0.0432 | 1.6974 | 9962 | 0.0691 | 0.9787 | 0.9718 | 0.9748 | 0.0213 | 0.0213 | 0.9651 | 0.9787 |
| 0.0439 | 1.7972 | 10548 | 0.0683 | 0.9811 | 0.9748 | 0.9747 | 0.0150 | 0.0254 | 0.9750 | 0.9746 |
| 0.0442 | 1.8971 | 11134 | 0.0710 | 0.9809 | 0.9744 | 0.9719 | 0.0118 | 0.0313 | 0.9802 | 0.9687 |
| 0.0425 | 1.9969 | 11720 | 0.0671 | 0.9810 | 0.9747 | 0.9756 | 0.0165 | 0.0232 | 0.9726 | 0.9768 |
| 0.0299 | 2.0968 | 12306 | 0.0723 | 0.9802 | 0.9738 | 0.9758 | 0.0187 | 0.0217 | 0.9692 | 0.9783 |
| 0.0312 | 2.1966 | 12892 | 0.0790 | 0.9804 | 0.9738 | 0.9731 | 0.0146 | 0.0279 | 0.9755 | 0.9721 |
| 0.0266 | 2.2965 | 13478 | 0.0840 | 0.9815 | 0.9752 | 0.9728 | 0.0115 | 0.0302 | 0.9806 | 0.9698 |
| 0.0277 | 2.3963 | 14064 | 0.0742 | 0.9808 | 0.9746 | 0.9770 | 0.0188 | 0.0199 | 0.9690 | 0.9801 |
| 0.0294 | 2.4962 | 14650 | 0.0764 | 0.9809 | 0.9747 | 0.9765 | 0.0179 | 0.0211 | 0.9705 | 0.9789 |
| 0.0304 | 2.5960 | 15236 | 0.0795 | 0.9811 | 0.9748 | 0.9742 | 0.0142 | 0.0266 | 0.9763 | 0.9734 |
| 0.0287 | 2.6959 | 15822 | 0.0783 | 0.9814 | 0.9751 | 0.9741 | 0.0134 | 0.0272 | 0.9775 | 0.9728 |
| 0.0267 | 2.7957 | 16408 | 0.0805 | 0.9814 | 0.9751 | 0.9740 | 0.0133 | 0.0274 | 0.9777 | 0.9726 |
| 0.0318 | 2.8956 | 16994 | 0.0767 | 0.9814 | 0.9752 | 0.9756 | 0.0154 | 0.0240 | 0.9744 | 0.9760 |
| 0.0305 | 2.9954 | 17580 | 0.0779 | 0.9815 | 0.9753 | 0.9751 | 0.0146 | 0.0251 | 0.9757 | 0.9749 |
We selected the checkpoint with the highest F-beta1.6 score.
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
### Out-of-Scope Use
It should not be used for any illegal activities.
## Bias, Risks, and Limitations
The model may exhibit biases based on the training data used. It may not perform well on text that is written in languages other than English. It may also struggle with sarcasm, irony, or other forms of figurative language. The model may produce false positives or false negatives, which could lead to incorrect categorization of text.
### Recommendations
Users should be aware of the limitations and biases of the model and use it accordingly. They should also be prepared to handle false positives and false negatives. It is recommended to fine-tune the model for specific downstream tasks and to evaluate its performance on relevant datasets.
### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("eliasalbouzidi/distilbert-nsfw-text-classifier")
model = AutoModelForSequenceClassification.from_pretrained("eliasalbouzidi/distilbert-nsfw-text-classifier")
```
### Use a pipeline
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="eliasalbouzidi/distilbert-nsfw-text-classifier")
```
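For example, classifying one of the sample captions end to end (the exact label scores will vary):

```python
from transformers import pipeline

# Load the classifier and score a short description
pipe = pipeline("text-classification", model="eliasalbouzidi/distilbert-nsfw-text-classifier")

print(pipe("A family hiking in the mountains"))
# e.g. [{'label': 'safe', 'score': ...}]
```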
## Contact
Please reach out to [email protected] if you have any questions or feedback. |
cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B | cognitivecomputations | "2024-05-20T14:38:47Z" | 1,040 | 29 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-07T22:32:46Z" | ---
license: mit
language:
- en
base_model:
- unsloth/Phi-3-mini-4k-instruct
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# Dolphin 2.9.1 Phi-3 Kensho 4.5b 🐬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, with help from the community of Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node
This model utilizes PEFT layer replication at inference time to duplicate layers and increase the parameter count. This works both with the merged model that ships in this repository
and with the attached adapter. Performance is similar with both methods, but VRAM use is considerably lower when using the adapter.
This model was initialized using [Unsloth's Mistralfied Phi-3-Instruct-4k](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct). If you choose to use the adapter method, please attach it to their model.
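A minimal sketch of the adapter route, assuming the PEFT adapter attached to this repository loads directly with `peft` (the exact repo layout is not confirmed here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Phi-3-mini-4k-instruct"  # Mistralfied base this model was initialized from
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the Dolphin adapter on top of the base model; per the card,
# layer replication is applied via PEFT at inference time.
model = PeftModel.from_pretrained(base, "cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B")
```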
<img src="https://i.ibb.co/C6sqLBH/Vram-Use.png" width="300">
This model is based on Phi-3-Mini-Instruct-4k, and is governed by the MIT license in which Microsoft released Phi-3.
The base model has a 4k context window, and the qLoRA fine-tuning was done with a 4k sequence length.
Training took 2.5 days on an 8xL40S node provided by Crusoe Cloud.
This model uses ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to the MIT license. I grant permission for any use, including commercial. Dolphin was trained on data generated from GPT4, among other models.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
Ba2han/Llama-Phi-3_DoRA | Ba2han | "2024-06-26T20:30:58Z" | 1,040 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:Sao10K/Claude-3-Opus-Instruct-15K",
"dataset:abacusai/SystemChat-1.1",
"dataset:Ba2han/DollyLlama-5k",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-15T22:21:36Z" | ---
language:
- en
license: mit
datasets:
- Sao10K/Claude-3-Opus-Instruct-15K
- abacusai/SystemChat-1.1
- Ba2han/DollyLlama-5k
model-index:
- name: Llama-Phi-3_DoRA
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
name: Open LLM Leaderboard
---
**We have Llama-3 at home!**
Highest PHI-3-Mini MMLU and Winogrande on the board!
The model has been trained on filtered versions of tagged datasets, as well as a few thousand more examples generated with llama-3-70B.
Use the **Zephyr template** with any system message. The default system message should be:
You are a smart, friendly and helpful assistant.
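For reference, a standard Zephyr-style prompt with that system message looks like this (a sketch; check the tokenizer's chat template for the exact special tokens):

```
<|system|>
You are a smart, friendly and helpful assistant.</s>
<|user|>
{prompt}</s>
<|assistant|>
```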

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Ba2han__Llama-Phi-3_DoRA)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.72|
|AI2 Reasoning Challenge (25-Shot)|62.29|
|HellaSwag (10-Shot) |79.08|
|MMLU (5-Shot) |69.44|
|TruthfulQA (0-shot) |54.08|
|Winogrande (5-shot) |73.40|
|GSM8k (5-shot) |68.01|
|
mmnga/umiyuki-Umievo-itr012-Gleipnir-7B-gguf | mmnga | "2024-05-29T15:53:40Z" | 1,040 | 2 | null | [
"gguf",
"mistral",
"mergekit",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:apache-2.0",
"region:us"
] | null | "2024-05-29T15:05:32Z" | ---
license: apache-2.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
tags:
- mistral
- mergekit
---
# umiyuki-Umievo-itr012-Gleipnir-7B-gguf
This is a GGUF-format conversion of [Umievo-itr012-Gleipnir-7B, published by umiyuki](https://huggingface.co/umiyuki/Umievo-itr012-Gleipnir-7B).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'umiyuki-Umievo-itr012-Gleipnir-7B-Q4_0.gguf' -n 128 -p 'こんにちわ'
``` |
mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF | mradermacher | "2024-06-13T15:54:17Z" | 1,040 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:aqua_rat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:00:01Z" | ---
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
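For the split Q6_K download listed below, the parts can simply be joined into a single file before use, e.g.:

```bash
# Join the two-part Q6_K download into one GGUF file (filenames as listed in the table below)
cat Smaug-Llama-3-70B-Instruct-32K.i1-Q6_K.gguf.part1of2 \
    Smaug-Llama-3-70B-Instruct-32K.i1-Q6_K.gguf.part2of2 \
    > Smaug-Llama-3-70B-Instruct-32K.i1-Q6_K.gguf
```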
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-32K-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-32K.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|