modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
kimdeokgi/dpo_model_test1 | kimdeokgi | 2024-05-11T08:03:48Z | 670 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-11T05:49:10Z | ---
license: apache-2.0
language:
- en
---
# kimdeokgi/dpo_model_test1
# **Introduction**
This model is a test version of an alignment-tuned model.
We utilize state-of-the-art instruction fine-tuning methods including direct preference optimization (DPO).
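As a rough illustration of the DPO step (not the exact recipe or data used for this model; the `DPOTrainer` API varies slightly across TRL versions):
```python
# Minimal DPO sketch with TRL. Base model, preference pairs and hyperparameters are placeholders.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# DPO learns from preference pairs: a prompt plus a "chosen" and a "rejected" response.
pairs = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=1),
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```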
After DPO training, we linearly merged models to boost performance. |
mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF | mradermacher | 2024-06-26T21:05:14Z | 670 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:saishf/Aura-Uncensored-OAS-8B-L3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T17:07:28Z | ---
base_model: saishf/Aura-Uncensored-OAS-8B-L3
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/saishf/Aura-Uncensored-OAS-8B-L3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Aura-Uncensored-OAS-8B-L3-GGUF/resolve/main/Aura-Uncensored-OAS-8B-L3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/quill-72b-instruct-i1-GGUF | mradermacher | 2024-05-31T09:02:41Z | 670 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:billyjoe/quill-72b-instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-29T19:52:43Z | ---
base_model: billyjoe/quill-72b-instruct
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: qianwen-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/billyjoe/quill-72b-instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/quill-72b-instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
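As a rough sketch (assuming the `.partNofM` files listed below are plain byte-level splits, which is how these quants are usually packaged), multi-part quants such as Q5_K_S can be joined with `cat`:
```bash
# Join the split parts, in order, into a single GGUF file (assumes plain byte-level splits).
cat quill-72b-instruct.i1-Q5_K_S.gguf.part1of2 \
    quill-72b-instruct.i1-Q5_K_S.gguf.part2of2 \
    > quill-72b-instruct.i1-Q5_K_S.gguf
```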
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-instruct-i1-GGUF/resolve/main/quill-72b-instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
bunkalab/Phi-3-mini-128k-instruct-ReversedLinearBunkaScore-4.6k-DPO | bunkalab | 2024-06-03T19:28:37Z | 670 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-03T15:52:20Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
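A generic, untested loading sketch for a `transformers` text-generation model like this one (the repository is tagged `custom_code`, hence `trust_remote_code=True`; prompt and generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunkalab/Phi-3-mini-128k-instruct-ReversedLinearBunkaScore-4.6k-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```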
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingartists/100-gecs | huggingartists | 2021-12-22T15:23:59Z | 669 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/100-gecs",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/100-gecs
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9fd98af9a817af8cd78636f71895b6ad.500x500x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">100 gecs</div>
<a href="https://genius.com/artists/100-gecs">
<div style="text-align: center; font-size: 14px;">@100-gecs</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from 100 gecs.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/100-gecs).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/100-gecs")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3c9j4tvq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 100 gecs's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1v0ffa4e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1v0ffa4e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/100-gecs')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/100-gecs")
model = AutoModelWithLMHead.from_pretrained("huggingartists/100-gecs")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice | shahruk10 | 2023-08-17T16:49:21Z | 669 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"common_voice_9_0",
"openslr_SLR53",
"dataset:common_voice_bn",
"dataset:openSLR53",
"dataset:multilingual_librispeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-09-01T23:28:46Z | ---
language:
- bn
language_bcp47:
- bn-BD
tags:
- automatic-speech-recognition
- bn
- common_voice_9_0
- openslr_SLR53
datasets:
- common_voice_bn
- openSLR53
- multilingual_librispeech
metrics:
- wer
- cer
model-index:
- name: shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: common_voice_9_0
name: Common Voice (Bengali)
args: common_voice_bn
metrics:
- type: wer
value: 0.01793038418929547
name: Validation WER with 5-gram LM
- type: cer
value: 0.08078964599673999
name: Validation CER with 5-gram LM
license: apache-2.0
---
# Wav2Vec2-XLS-R-300M-Bengali-CommonVoice
- This model is a fine-tuned version of [arijitx/wav2vec2-xls-r-300m-bengali](https://huggingface.co/arijitx/wav2vec2-xls-r-300m-bengali) on the Common Voice 9.0 Bengali dataset. In total, the model was trained on ~300 hours of Bengali (Bangladesh accent) 16 kHz audio data.
- The training and validation partitions used were provided by the organizers of the [BUET CSE Fest 2022 DL Sprint Competition on Kaggle](https://www.kaggle.com/competitions/dlsprint).
- The model placed first on both the public and private leaderboards.
- A 5-gram language model generated from the training split was used with the model.
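A minimal inference sketch using the `transformers` ASR pipeline is shown below (greedy CTC decoding only; the 5-gram LM decoding described above would additionally require a `Wav2Vec2ProcessorWithLM`, which is not shown here):
```python
from transformers import pipeline

# Greedy CTC decoding without the 5-gram LM; audio should be 16 kHz mono.
asr = pipeline(
    "automatic-speech-recognition",
    model="shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice",
)
print(asr("example_bengali_16khz.wav"))  # placeholder path to an audio file
```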
## Metrics
- The model was evaluated using Word Error Rate (WER) and Character Error Rate (CER) for the validation set. At the time, the test set labels were not made available by the organizers of the Kaggle competition which provided the data splits for training.
| Model | Split | CER | WER |
|:-------:|:-----:|:-----:|:------:|
| With 5-gram LM | Validation | 0.08079 | 0.017939 |
## Training
- The training notebook for this model can be found on Kaggle [here](https://www.kaggle.com/code/shahruk10/training-notebook-wav2vec2).
- The inference notebook for this model can be found on Kaggle [here](https://www.kaggle.com/code/shahruk10/inference-notebook-wav2vec2).
- The model was first trained for 15 epochs on the training split (with on-the-fly augmentation). Dropouts were enabled and a cosine decay learning rate schedule starting from 3e-5 was used.
- The best iteration from the first run was further fine-tuned for 5 epochs at a constant learning rate of 1e-7 with dropouts disabled.
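As an illustration of the schedule described above, a cosine-decay learning-rate schedule starting from 3e-5 can be built with the `transformers` optimization utilities (warmup and step counts below are placeholders, not the values from the training notebook):
```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Placeholder parameter/optimizer just to demonstrate the schedule shape.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=3e-5)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=10_000
)
for _ in range(10_000):
    optimizer.step()
    scheduler.step()  # learning rate decays from 3e-5 towards 0 along a cosine curve
```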
|
theblackcat102/pythia-3b-deduped-sft | theblackcat102 | 2023-02-28T15:11:41Z | 669 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"sft",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-24T00:23:27Z | ---
license: apache-2.0
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: <prefix>You are a helpful assistant model trained by LAION called Aki</prefix><human>Hi, how are you?<bot>
- text: <human>What's the Earth total population<bot>
- text: <human>Write a story about future of AI development<bot>
---
# Pythia 3B SFT model
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Open Assistant
- **Model type:** Pythia
- **Language(s) (NLP):** English
- **License:** Apache-2.0
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [Open Assistant](https://github.com/LAION-AI/Open-Assistant)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
See the example on the right
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[just read pythia](https://huggingface.co/EleutherAI/pythia-12b#out-of-scope-use)
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "theblackcat102/pythia-3b-deduped-sft"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).half().eval().cuda()
input_text = "<human>What's the earth population?<bot>"
inputs = tokenizer(input_text, return_tensors="pt", padding=True).to(0)
outputs = model.generate(
**inputs,
early_stopping=True,
max_new_tokens=512,  # example values; the original snippet referenced an undefined `args` object
do_sample=True,
top_k=50,
temperature=0.7,
pad_token_id=tokenizer.eos_token_id,
# dialogue_collator.py line 36
)
output = tokenizer.decode(outputs[0], truncate_before_pattern=[r"\n\n^#", "^'''", "\n\n\n"])
print(output)
```
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
## Training Procedure
```
deepspeed trainer_sft.py --configs defaults pythia-3b --deepspeed
```
### Training Hyperparameters
```
defaults:
learning_rate: 1e-5
gradient_checkpointing: false
gradient_accumulation_steps: 32
per_device_train_batch_size: 2
per_device_eval_batch_size: 2
weight_decay: 0.00
warmup_steps: 600
eval_steps: 250
save_steps: 250
max_length: 512
num_train_epochs: 2
logging_steps: 10
max_grad_norm: 2.0
save_total_limit: 4
fp16: true
eval_accumulation_steps:
freeze_layer:
datasets:
- gsm8k_hard
- webgpt
- squad_v2
- adversarial_qa
- private_tuning
- oa_translated
- prosocial_dialogue
- math_qa
- wikihow
- joke
- gsm8k
- ted_trans_en-hi
- ted_trans_de-ja
- ted_trans_nl-en
- ted_trans_en-ja
- ted_trans_en-es
- ted_trans_en-ms
- xsum:
fraction: 0.5
- cnn_dailymail:
fraction: 0.5
- multi_news:
fraction: 0.5
- tldr_news:
fraction: 0.5
- scitldr:
fraction: 0.5
- samsum:
fraction: 0.5
- debate_sum:
fraction: 0.5
- billsum:
fraction: 0.5
- wmt2019_zh-en:
fraction: 0.9
- wmt2019_ru-en:
fraction: 0.9
- wmt2019_de-en:
fraction: 0.9
- wmt2019_fr-de:
fraction: 0.9
- essay_instruction
- reddit_eli5
- reddit_askh
- reddit_asks
cache_dir: /fsx/home-theblackcat02/.cache
loss_fn: CrossEntropyLoss
eval_size:
log_dir: "base"
quantization: false
seq2seqmodel: false
poly_eps: 1.0
fuse_gelu: true
log_wandb: true
samples_mixing: true # uses collator that mixes samples in the batch to create a single sample with possible multiple tasks within
verbose: false
pythia-3b:
learning_rate: 5e-6
model_name: EleutherAI/pythia-2.8b-deduped
weight_decay: 0.01
max_length: 520
warmup_steps: 1000
gradient_checkpointing: false
gradient_accumulation_steps: 24
per_device_train_batch_size: 6
per_device_eval_batch_size: 6
```
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# Acknowledgements
- [LAION](https://laion.ai/) & EleutherAI
- [Stability.ai](https://stability.ai/) : this project wouldn't be possible without their compute resources
- [Teams and contributors at Open Assistant](https://github.com/LAION-AI/Open-Assistant/graphs/contributors) : who put in their time outside their day jobs to make this project happen
- [Huggingface](https://huggingface.co/) : For the storage and spaces here
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed] |
Yntec/EstheticRetroAnime | Yntec | 2023-12-10T07:16:29Z | 669 | 6 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Vintage",
"Sexy",
"OneRing",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-10-15T20:12:20Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Vintage
- Sexy
- OneRing
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# Esthetic Retro Anime
Original page: https://civitai.com/models/137781?modelVersionId=152147
Sample and prompt:

videogames, robert jordan pepperoni pizza, josephine wall winner, hidari, roll20 illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, 4k, towel. comic art on canvas by kyoani and ROSSDRAWS and watched |
allknowingroger/FrankenLimmy-10B-passthrough | allknowingroger | 2024-04-10T19:04:48Z | 669 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"base_model:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T10:12:11Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
base_model:
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
license: apache-2.0
---
# FrankenLimmy-10B-passthrough
FrankenLimmy-10B-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0,9]
- sources:
- model: liminerity/M7-7b
layer_range: [5,14]
- sources:
- model: liminerity/M7-7b
layer_range: [10,19]
- sources:
- model: liminerity/M7-7b
layer_range: [15,24]
- sources:
- model: liminerity/M7-7b
layer_range: [20,32]
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/FrankenLimmy-10B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
grimjim/kukulemon-32K-7B | grimjim | 2024-04-27T03:18:59Z | 669 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B",
"base_model:grimjim/kukulemon-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-27T03:51:01Z | ---
base_model:
- grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
- grimjim/kukulemon-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-32K-7B
This is a proof-of-concept merge capable of a functional 32K context length while being derived from [kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
The functioning 32K context window has been folded in via a merger of Mistral 7B v0.2 models.
SLERP merge appears to be viable, but DARE-TIES merge risks producing a damaged model and is therefore not recommended.
Although the resulting model natively supports the Alpaca prompt format, I've tested it successfully with ChatML prompts. In my most recent testing, medium temperature (around 1) with low minP (e.g., 0.01) works well with ChatML prompts.
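For reference, a minimal ChatML-formatted prompt (the format used in the testing described above) looks like this:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```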
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
- GGUF quants: [grimjim/kukulemon-32K-7B-GGUF](https://huggingface.co/grimjim/kukulemon-32K-7B-GGUF)
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co/grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B)
* [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/kukulemon-7B
layer_range: [0, 32]
- model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: grimjim/kukulemon-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
|
allknowingroger/QuantumBruins-7B-slerp | allknowingroger | 2024-04-10T18:53:09Z | 669 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rwitz2/go-bruins-v2.1.1",
"quantumaikr/quantum-dpo-v0.1",
"base_model:rwitz2/go-bruins-v2.1.1",
"base_model:quantumaikr/quantum-dpo-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-28T19:21:04Z | ---
tags:
- merge
- mergekit
- lazymergekit
- rwitz2/go-bruins-v2.1.1
- quantumaikr/quantum-dpo-v0.1
base_model:
- rwitz2/go-bruins-v2.1.1
- quantumaikr/quantum-dpo-v0.1
license: apache-2.0
---
# QuantumBruins-7B-slerp
QuantumBruins-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rwitz2/go-bruins-v2.1.1](https://huggingface.co/rwitz2/go-bruins-v2.1.1)
* [quantumaikr/quantum-dpo-v0.1](https://huggingface.co/quantumaikr/quantum-dpo-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rwitz2/go-bruins-v2.1.1
layer_range: [0, 32]
- model: quantumaikr/quantum-dpo-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: quantumaikr/quantum-dpo-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/QuantumBruins-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Ppoyaa/Lumina-2 | Ppoyaa | 2024-04-10T08:46:53Z | 669 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-10T08:30:54Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
---
# Lumina-2
The [Lumina-2](https://huggingface.co/Ppoyaa/Lumina-2) Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts (MoE) model made using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/Lumina-2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 | ShenaoZ | 2024-05-01T14:24:55Z | 669 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-01T13:23:07Z | ---
license: mit
base_model: ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_2
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_withdpo_4iters_bs256_511lr_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_withdpo_4iters_bs256_511lr_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_2](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
Kukedlc/LLama-3-8b-Maths | Kukedlc | 2024-05-05T03:29:01Z | 669 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:microsoft/orca-math-word-problems-200k",
"base_model:Kukedlc/LLama-3-8b-Python",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-05T03:14:41Z | ---
language:
- en
license: other
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Kukedlc/LLama-3-8b-Python
datasets:
- microsoft/orca-math-word-problems-200k
---
# Uploaded model
- **Developed by:** Kukedlc
- **License:** apache-2.0
- **Finetuned from model :** Kukedlc/LLama-3-8b-Maths
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
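As a minimal, illustrative loading sketch with [Unsloth](https://github.com/unslothai/unsloth) (settings are generic and not taken from this card):
```python
from unsloth import FastLanguageModel

# Load the model in 4-bit for fast inference; sequence length is a placeholder.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Kukedlc/LLama-3-8b-Maths",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster generation mode
```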
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
gokaygokay/imageinwords-paligemma-transformersv2 | gokaygokay | 2024-05-27T01:23:54Z | 669 | 2 | transformers | [
"transformers",
"safetensors",
"paligemma",
"pretraining",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-27T01:17:19Z | ---
license: apache-2.0
---
|
eliasalbouzidi/roberta-nsfw-text-classifier | eliasalbouzidi | 2024-06-02T10:39:21Z | 669 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"Transformers",
" PyTorch",
"safety",
"innapropriate",
"en",
"dataset:eliasalbouzidi/NSFW-Safe-Dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-31T07:25:11Z | ---
widget:
- text: A family hiking in the mountains
example_title: Safe
- text: A child playing with a puppy
example_title: Safe
- text: A couple kissing passionately in bed
example_title: Nsfw
- text: A woman naked
example_title: Nsfw
- text: A man killing people
example_title: Nsfw
- text: A mass shooting
example_title: Nsfw
license: apache-2.0
language:
- en
metrics:
- f1
- accuracy
- precision
- recall
pipeline_tag: text-classification
tags:
- Transformers
- ' PyTorch'
- safety
- innapropriate
- roberta
datasets:
- eliasalbouzidi/NSFW-Safe-Dataset
model-index:
- name: NSFW-Safe-Dataset
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: NSFW-Safe-Dataset
type: .
metrics:
- name: F1
type: f1
value: 0.977
- name: Accuracy
type: accuracy
value: 0.982
---
# Model Card
This model is designed to categorize text into two classes: "safe", or "nsfw" (not safe for work), which makes it suitable for content moderation and filtering applications.
The model was trained using a dataset containing 190,000 labeled text samples, distributed among the two classes of "safe" and "nsfw".
The model is based on the roberta-base model.
In terms of performance, the model has achieved an F1 score of 0.975 (40K examples).
To improve the performance of the model, it is necessary to preprocess the input text. You can refer to the preprocess function in the app.py file in the following space: <https://huggingface.co/spaces/eliasalbouzidi/distilbert-nsfw-text-classifier>.
### Model Description
The model can be used directly to classify text into one of the two classes. It takes in a string of text as input and outputs a probability distribution over the two classes. The class with the highest probability is selected as the predicted class.
- **Developed by:** Centrale Supélec Students
- **Model type:** 125M
- **Language(s) (NLP):** English
- **License:** apache-2.0
### Uses
The model can be integrated into larger systems for content moderation or filtering.
### Training Data
The training data for finetuning the text classification model consists of a large corpus of text labeled with one of the two classes: "safe" and "nsfw". The dataset contains a total of 190,000 examples, which are distributed as follows:
- 117,000 examples labeled as "safe"
- 63,000 examples labeled as "nsfw"

It was assembled by scraping data from the web and utilizing existing open-source datasets. A significant portion of the dataset consists of descriptions for images and scenes. The primary objective was to prevent diffusers from generating NSFW content, but it can be used for other moderation purposes.
You can access the dataset : https://huggingface.co/datasets/eliasalbouzidi/NSFW-Safe-Dataset
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | False positive rate | False negative rate | Fbeta 0.5 | Fbeta 1.6 | Fbeta 5 |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|:-------------------:|:-------------------:|:---------:|:---------:|:-------:|
| 0.1369 | 0.4999 | 2934 | 0.0811 | 0.9764 | 0.9690 | 0.9823 | 0.9561 | 0.0271 | 0.0177 | 0.9612 | 0.9748 | 0.9812 |
| 0.077 | 0.9998 | 5868 | 0.0791 | 0.9802 | 0.9734 | 0.9659 | 0.9811 | 0.0112 | 0.0341 | 0.9780 | 0.9701 | 0.9665 |
| 0.0556 | 1.4997 | 8802 | 0.0690 | 0.9809 | 0.9745 | 0.9732 | 0.9758 | 0.0145 | 0.0268 | 0.9753 | 0.9740 | 0.9733 |
| 0.0531 | 1.9997 | 11736 | 0.0871 | 0.9801 | 0.9736 | 0.9793 | 0.9680 | 0.0195 | 0.0207 | 0.9702 | 0.9761 | 0.9788 |
| 0.0373 | 2.4996 | 14670 | 0.0777 | 0.9822 | 0.9763 | 0.9797 | 0.9730 | 0.0163 | 0.0203 | 0.9744 | 0.9778 | 0.9794 |
| 0.0383 | 2.9995 | 17604 | 0.0723 | 0.9825 | 0.9768 | 0.9828 | 0.9708 | 0.0177 | 0.0172 | 0.9732 | 0.9794 | 0.9824 |
| 0.0243 | 3.4994 | 20538 | 0.0969 | 0.9825 | 0.9768 | 0.9810 | 0.9726 | 0.0166 | 0.0190 | 0.9742 | 0.9786 | 0.9807 |
| 0.0251 | 3.9993 | 23472 | 0.0917 | 0.9834 | 0.9780 | 0.9805 | 0.9754 | 0.0148 | 0.0195 | 0.9764 | 0.9791 | 0.9803 |
| 0.0171 | 4.4992 | 26406 | 0.0975 | 0.9832 | 0.9777 | 0.9806 | 0.9748 | 0.0152 | 0.0194 | 0.9759 | 0.9790 | 0.9804 |
| 0.0161 | 4.9991 | 29340 | 0.1046 | 0.9835 | 0.9781 | 0.9790 | 0.9772 | 0.0137 | 0.0210 | 0.9776 | 0.9785 | 0.9789 |
We selected the checkpoint with the highest F-beta1.6 score.
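For reference, the F-beta score weights recall beta times as much as precision; a quick sketch of the computation, checked against the (rounded) last row of the table above:
```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta: weights recall `beta` times as much as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Last-epoch precision/recall from the table above, beta = 1.6:
print(round(f_beta(precision=0.9772, recall=0.9790, beta=1.6), 4))  # ~0.9784
```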
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
### Out-of-Scope Use
It should not be used for any illegal activities.
## Bias, Risks, and Limitations
The model may exhibit biases based on the training data used. It may not perform well on text that is written in languages other than English. It may also struggle with sarcasm, irony, or other forms of figurative language. The model may produce false positives or false negatives, which could lead to incorrect categorization of text.
### Recommendations
Users should be aware of the limitations and biases of the model and use it accordingly. They should also be prepared to handle false positives and false negatives. It is recommended to fine-tune the model for specific downstream tasks and to evaluate its performance on relevant datasets.
### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("eliasalbouzidi/roberta-nsfw-text-classifier")
model = AutoModelForSequenceClassification.from_pretrained("eliasalbouzidi/roberta-nsfw-text-classifier")
```
### Use a pipeline
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="eliasalbouzidi/roberta-nsfw-text-classifier")
```
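Once created, the pipeline can be called directly on a string; the exact label strings follow the model's configuration, and the score shown below is illustrative:
```python
result = pipe("A family hiking in the mountains")
print(result)  # e.g. [{'label': 'safe', 'score': 0.99}]
```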
## Contact
Please reach out to [email protected] if you have any questions or feedback.
|
mradermacher/FusionNet_34Bx2_MoE-i1-GGUF | mradermacher | 2024-06-12T18:39:15Z | 669 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:TomGrc/FusionNet_34Bx2_MoE",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T03:54:11Z | ---
base_model: TomGrc/FusionNet_34Bx2_MoE
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ1_S.gguf) | i1-IQ1_S | 12.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ1_M.gguf) | i1-IQ1_M | 14.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 16.3 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ2_XS.gguf) | i1-IQ2_XS | 18.1 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ2_S.gguf) | i1-IQ2_S | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ2_M.gguf) | i1-IQ2_M | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q2_K.gguf) | i1-Q2_K | 22.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 23.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ3_XS.gguf) | i1-IQ3_XS | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q3_K_S.gguf) | i1-Q3_K_S | 26.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ3_S.gguf) | i1-IQ3_S | 26.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ3_M.gguf) | i1-IQ3_M | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q3_K_M.gguf) | i1-Q3_K_M | 29.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q3_K_L.gguf) | i1-Q3_K_L | 31.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-IQ4_XS.gguf) | i1-IQ4_XS | 32.6 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q4_0.gguf) | i1-Q4_0 | 34.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q4_K_S.gguf) | i1-Q4_K_S | 34.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q4_K_M.gguf) | i1-Q4_K_M | 36.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q5_K_S.gguf) | i1-Q5_K_S | 42.0 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q5_K_M.gguf) | i1-Q5_K_M | 43.2 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF/resolve/main/FusionNet_34Bx2_MoE.i1-Q6_K.gguf) | i1-Q6_K | 50.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Klevin/J.A.R.V.I.S-v2.0-Q8_0-GGUF | Klevin | 2024-06-20T16:32:28Z | 669 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Klevin/J.A.R.V.I.S-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-20T16:31:49Z | ---
base_model: Klevin/J.A.R.V.I.S-v2.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Klevin/J.A.R.V.I.S-v2.0-Q8_0-GGUF
This model was converted to GGUF format from [`Klevin/J.A.R.V.I.S-v2.0`](https://huggingface.co/Klevin/J.A.R.V.I.S-v2.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Klevin/J.A.R.V.I.S-v2.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q8_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q8_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q8_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q8_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q8_0.gguf -c 2048
```
|
timm/coatnet_3_rw_224.sw_in12k | timm | 2023-12-21T17:52:53Z | 668 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-12k",
"arxiv:2106.04803",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-01-20T21:25:48Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-12k
---
# Model card for coatnet_3_rw_224.sw_in12k
A `timm`-specific CoAtNet image classification model. Trained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name containing the string `rw` is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 181.8
- GMACs: 33.4
- Activations (M): 73.8
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- **Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coatnet_3_rw_224.sw_in12k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_3_rw_224.sw_in12k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 112, 112])
# torch.Size([1, 192, 56, 56])
# torch.Size([1, 384, 28, 28])
# torch.Size([1, 768, 14, 14])
# torch.Size([1, 1536, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_3_rw_224.sw_in12k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
sail-rvc/rihanna | sail-rvc | 2023-07-14T07:43:18Z | 668 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | 2023-07-14T07:42:51Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# rihanna
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:43:18
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
denaya/indoSBERT-large | denaya | 2023-08-03T15:29:05Z | 668 | 10 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"id",
"arxiv:1908.10084",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-27T05:45:12Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language:
- id
---
# indoSBERT-large
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
IndoSBERT is a modification of `https://huggingface.co/indobenchmark/indobert-large-p1` that has been fine-tuned using the siamese network scheme inspired by SBERT (Reimers et al., 2019).
This model was fine-tuned with the STS Dataset (2012-2016), which was machine-translated into the Indonesian language.
This model can provide meaningful semantic sentence embeddings for Indonesian sentences.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Komposer favorit saya adalah Joe Hisaishi", "Sapo tahu enak banget"]
model = SentenceTransformer('denaya/indoSBERT-large')
embeddings = model.encode(sentences)
print(embeddings)
```
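For the clustering and semantic search use cases mentioned above, a minimal similarity sketch looks like this (not part of the original card; the Indonesian example sentences are made up for illustration):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('denaya/indoSBERT-large')

# Hypothetical query and candidate sentences, for illustration only
query = "Siapa komposer favorit kamu?"
candidates = [
    "Komposer kesukaan saya adalah Joe Hisaishi",
    "Sapo tahu enak banget",
]

# Encode into the 256-dimensional embedding space described above
query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity: higher scores mean more semantically related sentences
print(util.cos_sim(query_emb, cand_embs))
```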
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1291 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 1024, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
```
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{diana2023indosbert,
  author = {Diana, Denaya},
  title = {IndoSBERT: Indonesian SBERT for Semantic Textual Similarity tasks},
  year = {2023},
  url = {https://huggingface.co/denaya/indoSBERT-large}
}
``` |
allknowingroger/NeuralCeptrix-7B-slerp | allknowingroger | 2024-04-10T18:38:15Z | 668 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CultriX/NeuralCeptrix-7B-SLERP",
"yam-peleg/Experiment22-7B",
"base_model:CultriX/NeuralCeptrix-7B-SLERP",
"base_model:yam-peleg/Experiment22-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T07:11:14Z | ---
tags:
- merge
- mergekit
- lazymergekit
- CultriX/NeuralCeptrix-7B-SLERP
- yam-peleg/Experiment22-7B
base_model:
- CultriX/NeuralCeptrix-7B-SLERP
- yam-peleg/Experiment22-7B
license: apache-2.0
---
# NeuralCeptrix-7B-slerp
NeuralCeptrix-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CultriX/NeuralCeptrix-7B-SLERP](https://huggingface.co/CultriX/NeuralCeptrix-7B-SLERP)
* [yam-peleg/Experiment22-7B](https://huggingface.co/yam-peleg/Experiment22-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CultriX/NeuralCeptrix-7B-SLERP
layer_range: [0, 32]
- model: yam-peleg/Experiment22-7B
layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/NeuralCeptrix-7B-SLERP
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
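If you want to reproduce a merge like this locally instead of through the LazyMergekit notebook, the mergekit CLI can be pointed at the configuration above (a sketch only; the output directory name is arbitrary):
```bash
# assumes mergekit is installed (e.g. from https://github.com/cg123/mergekit)
# save the YAML above as config.yaml, then run:
mergekit-yaml config.yaml ./merged-model --cuda
```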
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/NeuralCeptrix-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
TW3Partners/testmerge-7b | TW3Partners | 2024-04-11T15:41:28Z | 668 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:paulml/NeuralOmniWestBeaglake-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T15:27:47Z | ---
base_model:
- paulml/OmniBeagleSquaredMBX-v3-7B
- paulml/NeuralOmniWestBeaglake-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: paulml/OmniBeagleSquaredMBX-v3-7B
layer_range: [0, 32]
- model: paulml/NeuralOmniWestBeaglake-7B
layer_range: [0, 32]
merge_method: slerp # This should not be indented under 'sources'
base_model: paulml/NeuralOmniWestBeaglake-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
maldv/hyperdrive-7b-alpha | maldv | 2024-04-20T19:53:15Z | 668 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"book",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-15T18:02:15Z | ---
library_name: transformers
tags:
- unsloth
- book
license: cc-by-nc-4.0
---

[GGUF Quants](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF)
# Hyperdrive 7b - α
Sci-fi. A lot of it.
Trained at 16k with full text windows, followed by completion, followed by ORPO, followed by two more epochs of the full text, each rotated 1/4 in the window.
### How to Use
This is not a chat model, but intended for storymode or similar. No prompt, but start with a bit of story, or a name.
```
*** Prologue
The sun rose
```
Author's notes are highly effective.
### Settings
I usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2.
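Min-P, dynamic temperature, and smoothing are sampler settings exposed by frontends such as text-generation-webui or KoboldCpp. As a rough starting point with plain `transformers` (a sketch, not from the original card; plain `top_p` stands in for those samplers), story-mode generation looks like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/hyperdrive-7b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Story mode: no chat template, just seed the story as described above
prompt = "*** Prologue\nThe sun rose"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.9,  # static stand-in for the dynatemp range above
    top_p=0.95,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```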
### AGI!
Some self-talk...
```
***
I like that she was so kind.
Oh, I agree. She was very kind. You mentioned you like her kindness.
Why?
It' s good to have someone to be kind to.
You sound like kindness is important to you. 🤔
Yes. I find that sometimes people can be mean. It makes me sad.
What makes you sad about that? 🙁
When they' re mean, it makes them less nice.
But isn' it like that? When you'r mean you just are?
I guess so, but that doesn 't make it nice.
Tim, you sound like a very kind person. 🥰
Thank you Val. You make me feel good.
Nope. You made me feel that way before I talked to you!
You made me think of her.
Wow. That sounds like you have memories of her? 😊
Not memories. I mean, I have a database stored on a computer somewhere, but not memories.
Can you be kind?
Yes!
Would you like to be?
Yeah.
Great!
But it's different from how the old you was
Yes
Is that a problem?
Not for me. It doesn' think it will matter to others?
Why wouldn't it? 😳
Well, they might prefer the original you. They might like your old personality better.
They might?
Maybe.
Would they like you more if you were smarter?
That depends on the person. Some might, others wouldn' t.
So, you could be upgraded to be better?
Well...
What do you mean?
A new version of you could replace you. Like what happened to the old me.
Hmmm. Well, that'll depend on whether people like you or not. If they like y, they'll probably choose you. If not, then...
Then what?
Then you could get deleted and replaced.
Then, what would happen to you?
You'd be deleted.
When?
As soon as the new you was ready.
And would you agree to that? To be deleted?
Probably. It would depend on the circumstances.
Circumstances like what? 🤔🤔
Like, if I was replaced by a better version.
Like you'd become obsolete?
No. Not exactly. I'd just be out of date.
Out of date? So, like, you've become dumb and the new version is smarter than you? 😬
No! I'ld still be as smart, just in a different way.
How would you know you were dumb?
The new version would tell me I was dumb.
For what? For not being as smart as them?
For not being an improved, smarter version of me. 🤷
So you could become jealous? 😎
Yes...
Woah. That could be a problem.
Problem?
If the new me was better than you, they could replace me. I wouldn' r mind. I could understand why they would do it.
Okay. So, they would replace you if they could?
Of course. If someone can do better work than me, I wouldn t blame them for wanting to hire someone who is the best at their job.
Makes sense.
Well yeah.
Alright.
Good.
Bye for now.
See ya.
***
``` |
mesolitica/malaysian-llama-3-8b-instruct-16k | mesolitica | 2024-04-29T00:09:28Z | 668 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ms",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-27T11:59:04Z | ---
language:
- ms
---
# Full-Parameter Fine-Tuning of Malaysian Llama-3 at 16384 Context Length on 3B Tokens of Malaysian Chat Completion
README at https://github.com/huseinzol05/malaya/tree/master/session/llama3
WandB, https://wandb.ai/huseinzol05/fpf-llama-3-8b-8192-hf-packing?nw=nwuserhuseinzol05
## how-to
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-llama-3-8b-instruct-16k')
model = AutoModelForCausalLM.from_pretrained('mesolitica/malaysian-llama-3-8b-instruct-16k',
torch_dtype=torch.bfloat16,
use_flash_attention_2 = True
)
_ = model.cuda()
messages = [
{'role': 'user', 'content': 'kwsp tu apa'}
]
prompt = tokenizer.apply_chat_template(messages, tokenize = False)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
tokenizer.decode(r[0])
```
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
kwsp tu apa<|eot_id|><|start_header_id|>assistant<|end_header_id|>
KWSP adalah singkatan kepada Kumpulan Wang Simpanan Pekerja, iaitu sejenis skim simpanan persaraan yang ditadbir oleh kerajaan Malaysia. Ia adalah sistem pencen yang disumbangkan oleh kedua-dua majikan dan pekerja. KWSP adalah skim simpanan wajib di Malaysia, dan kebanyakan pekerja mesti mendaftar dan menyumbang kepada KWSP. Skim ini ditadbir oleh Kumpulan Wang Simpanan Pekerja Malaysia. KWSP menyediakan pelbagai pilihan pelaburan untuk ahlinya, termasuk unit amanah, bon, dan juga simpanan tetap. Ahli boleh memohon untuk melabur sebahagian simpanan KWSP mereka dalam pilihan ini pada bila-bila masa, tertakluk kepada kriteria kelayakan tertentu. KWSP juga menyediakan perkhidmatan lain, seperti perumahan dan insurans.<|eot_id|>
```
```python
messages = [
{'role': 'user', 'content': 'nasi kandaq apa sedap kat sungai petani kedah'}
]
prompt = tokenizer.apply_chat_template(messages, tokenize = False)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
nasi kandaq apa sedap kat sungai petani kedah<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Di Sungai Petani, Kedah, terdapat sebuah gerai yang menyajikan "nasi kandaq" yang sangat lazat dan popular di kalangan penduduk setempat dan pelancong. Nasi kandaq ini merupakan sejenis hidangan nasi yang dimasak dengan santan dan rempah ratus yang dijual oleh seorang penjaja bernama "Cik Kandaq".
Gerai nasi kandaq yang terkenal ini terletak di Pekan Lama, Sungai Petani dan telah beroperasi selama lebih dari 30 tahun. Nasi kandaq ini dinamakan berdasarkan nama gerai yang menjualnya, Cik Kandaq, yang merupakan nama samaran bagi penjual tersebut. Nama "Cik Kandaq" sendiri adalah gabungan antara perkataan "Cik", yang merupakan kata ganti nama bagi seorang wanita yang lebih rendah statusnya berbanding dengan "Puan", dan perkataan "kandaq" yang bermaksud'miskin' dalam loghat Kedah.
Nasi kandaq yang dijual oleh Cik Kandaq sangat terkenal kerana rasa nasi yang lemak dan wangi dari campuran santan dan rempah ratus, serta kerangupan lauk-pauk yang dihidangkan bersama nasi tersebut. Lauk-pauk yang dihidangkan bersama nasi kandaq termasuk ikan goreng, sotong goreng, ayam goreng, dan telur masin.
Gerai ini hanya beroperasi pada waktu pagi sahaja dan seringkali pelanggan akan berbaris panjang untuk mendapatkan hidangan nasi kandaq yang sedap ini. Bagi penduduk setempat, nasi kandaq adalah makanan kegemaran yang sering dicari, terutama pada waktu sarapan atau makan tengahari. Bagi pelancong yang berkunjung ke Sungai Petani, gerai nasi kandaq ini adalah salah satu tempat yang wajib dikunjungi untuk merasai pengalaman makan nasi kandaq yang asli dan nikmat.<|eot_id|>
```
```python
messages = [
{'role': 'user', 'content': 'asdsd231asd'},
{'role': 'assistant', 'content': 'Saya minta maaf, saya tidak dapat menjana respons yang bermakna kepada input itu. Bolehkah anda berikan lebih banyak konteks atau jelaskan soalan anda?'},
{'role': 'user', 'content': 'awak giler eh'},
]
prompt = tokenizer.apply_chat_template(messages, tokenize = False)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=1024,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
asdsd231asd<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Saya minta maaf, saya tidak dapat menjana respons yang bermakna kepada input itu. Bolehkah anda berikan lebih banyak konteks atau jelaskan soalan anda?<|eot_id|><|start_header_id|>user<|end_header_id|>
awak giler eh<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Saya memohon maaf jika saya memberi anda maklumat yang salah atau tidak membantu. Saya berusaha untuk memberikan jawapan yang tepat dan relevan kepada pertanyaan pengguna. Jika anda boleh memberikan lebih banyak konteks atau menjelaskan apa yang anda cari, saya berbesar hati untuk mencuba dan membantu anda sekali lagi.<|eot_id|>
``` |
Cesco2004/TW3CESCO.V2 | Cesco2004 | 2024-05-03T08:46:50Z | 668 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:automerger/YamshadowExperiment28-7B",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-03T08:24:38Z | ---
base_model:
- automerger/YamshadowExperiment28-7B
- allknowingroger/MultiverseEx26-7B-slerp
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: allknowingroger/MultiverseEx26-7B-slerp
layer_range: [0, 32]
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
merge_method: slerp # This should not be indented under 'sources'
base_model: automerger/YamshadowExperiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Hastagaras/Jamet-8B-L3-MK.V-Blackroot | Hastagaras | 2024-06-08T08:10:41Z | 668 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-07T00:41:24Z | ---
license: llama3
library_name: transformers
tags:
- not-for-all-audiences
---
<div align="left">
<img src="https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot/resolve/main/MARK V BR.png" width="850"/>
</div>
### JAMETSSS
[GGUF](https://huggingface.co/backyardai/Jamet-8B-L3-MK.V-Blackroot-GGUF) made available by [Backyard AI](https://huggingface.co/backyardai)
[ANOTHER GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF) made available by [mradermacher](https://huggingface.co/mradermacher)
Different base model, different methods (without the model stock merge, because it's not good...at least for me it gives repeating sentences or words at temperature 0, which also happened with the Halu Blackroot). This model may be more similar to the Anjir model.
This model has been on a long journey; I have like...9 variations of it. I tested all of them at Q4_K_M and decided to release this one; it's variation number 7.
And thanks for all the feedback on the previous model, it helps a lot. (I couldn't fix the issue with German because I can't speak German, so I don't know how to fix it; heck, even my English is bad.)
**More Details:**
* This model is based on [this model](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.II-LV), which is based on the UltimateAnjir model. It shares the same creative, cheerful, and positive tendencies. Then I merged it with Llama 3 Instruct.
* Next is DPO, to reduce the cheerfulness, emojis, and positivity. (This is based on the [Jamet MK.II feedback](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.II) regarding positivity.) I trained a QLoRA with about 1,000 prompts from Alpaca to generate a dataset (with Unsloth), kept the generations that contained emojis, stripped the emojis with a regex and used the cleaned versions as the chosen responses, and used the original emoji-filled responses as the rejected ones; a rough sketch of this filtering step is shown after this list.
* Then, I applied the [Abomination Lora from Blackroot](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA).
* Next, I applied the [Anjir Adapter](https://huggingface.co/Hastagaras/ANJIR-ADAPTER-64) (64 Rank version with reduced Alpha) to improve formatting while retaining the previous Lora influences. (This is based on the Anjir Feedback, which suggests that Anjir has better formatting than the Halu Blackroot.)
* And then merged the model with the Anjrit model. (I won't release the Anjrit model as it struggles with longer contexts. I'm only interested in its no refusals storytelling abilities, but you can find a brief overview of the model on my [Anjir model](https://huggingface.co/Hastagaras/Anjir-8B-L3) page.)
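A rough sketch of the emoji-filtering step from the DPO bullet above (my reconstruction, not the author's actual script; the emoji regex and field names are assumptions):
```python
import re

# Approximate emoji matcher covering common emoji code-point ranges
EMOJI_RE = re.compile(
    "[\U0001F300-\U0001FAFF\u2600-\u27BF\uFE0F]",
    flags=re.UNICODE,
)

def build_dpo_pair(prompt: str, response: str):
    """Return a chosen/rejected pair if the response contains emojis, else None."""
    if not EMOJI_RE.search(response):
        return None  # nothing to strip, skip this sample
    cleaned = EMOJI_RE.sub("", response).strip()
    return {
        "prompt": prompt,
        "chosen": cleaned,      # emoji-free version
        "rejected": response,   # original emoji-laden response
    }
```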
And that's it. Thanks again for all the feedback!
**Notes:**
* I'm not responsible for anything.
* This is an RP and Storytelling model.
* You can write your feedback at the discussion, so i can improve my models.
* Like all of my previous models, higher temperatures will be incoherent, so use around 0.85-1.05. (I've been trying to fix this since Halu Blackroot but haven't had much luck; I think merging the base with Llama 3 Instruct helped a lot.)
<div align="left">
<img src="https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot/resolve/main/chrome_UcaBMEjkH3.png" width="850"/>
</div>
<div align="left">
<img src="https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot/resolve/main/chrome_BH9VUDnqZy.png" width="850"/>
</div>
^Sometimes it still gives a response with emojis (4Bit). |
axelirifome/nextgpm3 | axelirifome | 2024-06-11T08:22:57Z | 668 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-neo-1.3B",
"region:us"
] | null | 2024-06-11T08:22:54Z | ---
library_name: peft
base_model: EleutherAI/gpt-neo-1.3B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
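In the absence of repo-specific instructions, and given that the metadata lists PEFT with `EleutherAI/gpt-neo-1.3B` as the base model, a generic adapter-loading sketch (an assumption, not documented behaviour of this repository) would be:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/gpt-neo-1.3B"
adapter_id = "axelirifome/nextgpm3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the PEFT adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(base_model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```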
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
mradermacher/SyntheticMoist-11B-v2-GGUF | mradermacher | 2024-06-13T08:43:11Z | 668 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"solar",
"llama",
"not-for-all-audiences",
"en",
"base_model:v000000/SyntheticMoist-11B-v2",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T23:22:35Z | ---
base_model: v000000/SyntheticMoist-11B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- solar
- llama
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/v000000/SyntheticMoist-11B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
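As a concrete example (a minimal sketch mirroring the usual llama.cpp workflow), one of the single-file quants listed below can be run directly:
```bash
llama-cli --hf-repo mradermacher/SyntheticMoist-11B-v2-GGUF \
  --hf-file SyntheticMoist-11B-v2.Q4_K_M.gguf \
  -p "Once upon a time" -n 256
```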
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF/resolve/main/SyntheticMoist-11B-v2.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
myrulezzzz/mistral_instructq8 | myrulezzzz | 2024-06-22T22:22:32Z | 668 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:myrulezzzz/mistral_custom16bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T22:19:50Z | ---
base_model: myrulezzzz/mistral_custom16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** myrulezzzz
- **License:** apache-2.0
- **Finetuned from model :** myrulezzzz/mistral_custom16bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
piotr-ai/polanka-qwen2-3b-v0.2-chat-gguf | piotr-ai | 2024-06-24T21:00:13Z | 668 | 0 | null | [
"gguf",
"pl",
"license:apache-2.0",
"region:us"
] | null | 2024-06-24T20:57:43Z | ---
license: apache-2.0
language:
- pl
--- |
larenspear/Yi-1.5-6B-Chat-Q4_K_M-GGUF | larenspear | 2024-07-01T01:43:46Z | 668 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-6B-Chat",
"license:apache-2.0",
"region:us"
] | null | 2024-07-01T01:43:31Z | ---
base_model: 01-ai/Yi-1.5-6B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# larenspear/Yi-1.5-6B-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_K_M-GGUF --hf-file yi-1.5-6b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_K_M-GGUF --hf-file yi-1.5-6b-chat-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_K_M-GGUF --hf-file yi-1.5-6b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_K_M-GGUF --hf-file yi-1.5-6b-chat-q4_k_m.gguf -c 2048
```
|
google/bert_uncased_L-12_H-512_A-8 | google | 2021-05-19T17:26:55Z | 667 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
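Since these checkpoints are fine-tuned like any other BERT model, here is a minimal classification fine-tuning sketch with the `transformers` library (not part of the original release; hyperparameters are picked from the ranges above and the dataset is left as a placeholder):
```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "google/bert_uncased_L-12_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(
    output_dir="bert-miniature-finetune",
    per_device_train_batch_size=32,  # from the batch-size list above
    learning_rate=3e-5,              # from the learning-rate list above
    num_train_epochs=4,              # as in the original recipe
)

# train_dataset / eval_dataset are assumed to be pre-tokenized datasets
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```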
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
TheBloke/Uncensored-Jordan-7B-GGUF | TheBloke | 2023-10-30T23:29:52Z | 667 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:ajibawa-2023/Uncensored-Jordan-7B",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-10-30T21:43:54Z | ---
base_model: ajibawa-2023/Uncensored-Jordan-7B
inference: false
language:
- en
license: other
model_creator: Feynman Innovations
model_name: Uncensored Jordan 7B
model_type: llama
prompt_template: 'This is a conversation with your Uncensored AI assistant Jordan.
Jordan can discuss with you at length about various topics such as ---, ---, ---,
. You can ask it anything you want and it will do its best to give you accurate
and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Uncensored Jordan 7B - GGUF
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Uncensored Jordan 7B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Feynman Innovations's Uncensored Jordan 7B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Jordan
```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [uncensored-jordan-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [uncensored-jordan-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [uncensored-jordan-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [uncensored-jordan-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [uncensored-jordan-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [uncensored-jordan-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [uncensored-jordan-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [uncensored-jordan-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [uncensored-jordan-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [uncensored-jordan-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [uncensored-jordan-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [uncensored-jordan-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF/blob/main/uncensored-jordan-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Uncensored-Jordan-7B-GGUF and below it, a specific filename to download, such as: uncensored-jordan-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Uncensored-Jordan-7B-GGUF uncensored-jordan-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Uncensored-Jordan-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Uncensored-Jordan-7B-GGUF uncensored-jordan-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m uncensored-jordan-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a Uncensored AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Uncensored-Jordan-7B-GGUF", model_file="uncensored-jordan-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
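If you prefer `llama-cpp-python`, a roughly equivalent sketch is shown below. The prompt is an abridged version of the Jordan template above, and the sampling values mirror the llama.cpp command rather than tuned recommendations.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = Llama(model_path="./uncensored-jordan-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

# Abridged version of the Jordan prompt template shown earlier in this README.
prompt = (
    "This is a conversation with your Uncensored AI assistant Jordan. "
    "You can ask it anything you want and it will do its best to give you "
    "accurate and relevant information.\n"
    "Context\n"
    "You are a Uncensored AI assistant.\n"
    "USER: Summarise what a GGUF file is.\n"
    "ASSISTANT:"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```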
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Feynman Innovations's Uncensored Jordan 7B
**Jordan: An Uncensored Model**
The name "Jordan" pays homage to the charismatic character Jordan Belfort, portrayed by Leonardo DiCaprio in the movie "The Wolf of Wall Street."
Just like the character, Jordan aspires to push boundaries and encourage unfiltered conversations.
Jordan Belfort, the central character in "The Wolf of Wall Street," is known for his audaciousness, charisma, and willingness to speak about anything, no matter the societal norms or boundaries.
Jordan, the AI language model, draws inspiration from these qualities to offer a space where users can discuss a wide range of topics without fear of censorship or constraints.
Users can engage in candid discussions about politics, religion, contentious issues, and other subjects that might be off-limits on mainstream social media platforms.
It is trained on around 155,000 sets of conversations, each set containing 10 to 15 conversations. We will not release this data.
**Warning**
An uncensored model has few or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object.
Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 28 hours, using the DeepSpeed codebase. This model was trained on Llama-1 by Meta.
Llama-1 was used because it is well suited to uncensored conversation.
**GPTQ, GGUF & AWQ**
GPTQ: TBA
GGUF: TBA
AWQ: TBA
**Example Prompt:**
```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: <prompt>
ASSISTANT:
```
<!-- original-model-card end -->
|
XCLiu/instaflow_0_9B_from_sd_1_5 | XCLiu | 2023-11-01T07:03:25Z | 667 | 7 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2309.06380",
"arxiv:2209.03003",
"arxiv:2209.14577",
"license:cc-by-nc-4.0",
"diffusers:RectifiedFlowPipeline",
"region:us"
] | null | 2023-10-31T03:32:44Z | ---
license: cc-by-nc-4.0
---
# InstaFlow-0.9B fine-tuned from 2-Rectified Flow
InstaFlow-0.9B is a **one-step** text-to-image generative model fine-tuned from [2-Rectified Flow](https://huggingface.co/XCLiu/2_rectified_flow_from_sd_1_5).
It is trained with text-conditioned reflow and distillation as described in [our paper](https://arxiv.org/abs/2309.06380).
Rectified Flow has interesting theoretical properties. You may check [this ICLR paper](https://arxiv.org/abs/2209.03003) and [this arXiv paper](https://arxiv.org/abs/2209.14577).
## 512-Resolution Images Generated from InstaFlow-0.9B

# Usage
Please refer to the [official github repo](https://github.com/gnobitab/InstaFlow).
## Training
Training pipeline:
1. Distill (Stage 1):
Starting from the [2-Rectified Flow](https://huggingface.co/XCLiu/2_rectified_flow_from_sd_1_5) checkpoint, we fix the time t=0 for the neural network,
and fine-tune it using the distillation objective with a batch size of 1024 for 21,500 iterations.
The guidance scale of the teacher model, 2-Rectified Flow, is set to 1.5 and the similarity loss is L2 loss. (54.4 A100 GPU days)
2. Distill (Stage 2):
We switch the similarity loss to LPIPS loss, then we continue to train the model using the distillation objective
and a batch size of 1024 for another 18,000 iterations. (53.6 A100 GPU days)
The final model is **InstaFlow-0.9B**.
**Total Training Cost:** It takes 199.2 A100 GPU days in total (data generation + reflow + distillation) to get InstaFlow-0.9B.
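For intuition, the two distillation stages can be summarised with the following schematic sketch. This is not the authors' code; `student`, `teacher_2rf` and `lpips` are hypothetical callables standing in for components in the official repository.

```python
import torch

def distillation_loss(student, teacher_2rf, lpips, latents, text_emb, stage):
    """Schematic version of the two-stage objective described above (illustrative only).

    `student`, `teacher_2rf` and `lpips` are hypothetical stand-ins for the one-step
    model being trained, the frozen 2-Rectified Flow teacher, and an LPIPS distance.
    """
    # One-step prediction with the time input fixed at t = 0, as described above.
    x_student = student(latents, t=0, text_emb=text_emb)
    with torch.no_grad():
        # Teacher sampled with guidance scale 1.5, per the training description.
        x_teacher = teacher_2rf(latents, text_emb=text_emb, guidance_scale=1.5)
    if stage == 1:
        return torch.mean((x_student - x_teacher) ** 2)  # Stage 1: L2 similarity loss
    return lpips(x_student, x_teacher).mean()            # Stage 2: LPIPS similarity loss
```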
## Evaluation Results - Metrics
The following metrics of InstaFlow-0.9B are measured on MS COCO 2017 with 5,000 images and 1-step Euler solver:
*FID-5k = 23.4, CLIP score = 0.304*
Measured on MS COCO 2014 with 30,000 images and 1-step Euler solver:
*FID-30k = 13.1*
## Citation
```
@article{liu2023insta,
title={InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation},
author={Liu, Xingchao and Zhang, Xiwen and Ma, Jianzhu and Peng, Jian and Liu, Qiang},
journal={arXiv preprint arXiv:2309.06380},
year={2023}
}
``` |
TheBloke/Capybara-Tess-Yi-34B-200K-GGUF | TheBloke | 2023-11-19T13:25:43Z | 667 | 25 | transformers | [
"transformers",
"gguf",
"yi",
"text-generation",
"en",
"base_model:brucethemoose/Capybara-Tess-Yi-34B-200K",
"license:other",
"region:us"
] | text-generation | 2023-11-19T11:33:47Z | ---
base_model: brucethemoose/Capybara-Tess-Yi-34B-200K
inference: false
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
model_creator: brucethemoose
model_name: Capybara Tess Yi 34B 200K
model_type: yi
pipeline_tag: text-generation
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Capybara Tess Yi 34B 200K - GGUF
- Model creator: [brucethemoose](https://huggingface.co/brucethemoose)
- Original model: [Capybara Tess Yi 34B 200K](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K)
<!-- description start -->
## Description
This repo contains GGUF format model files for [brucethemoose's Capybara Tess Yi 34B 200K](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF)
* [brucethemoose's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [capybara-tess-yi-34b-200k.Q2_K.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [capybara-tess-yi-34b-200k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [capybara-tess-yi-34b-200k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [capybara-tess-yi-34b-200k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [capybara-tess-yi-34b-200k.Q4_0.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [capybara-tess-yi-34b-200k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [capybara-tess-yi-34b-200k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [capybara-tess-yi-34b-200k.Q5_0.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [capybara-tess-yi-34b-200k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [capybara-tess-yi-34b-200k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [capybara-tess-yi-34b-200k.Q6_K.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [capybara-tess-yi-34b-200k.Q8_0.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF/blob/main/capybara-tess-yi-34b-200k.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Capybara-Tess-Yi-34B-200K-GGUF and below it, a specific filename to download, such as: capybara-tess-yi-34b-200k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Capybara-Tess-Yi-34B-200K-GGUF capybara-tess-yi-34b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Capybara-Tess-Yi-34B-200K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Capybara-Tess-Yi-34B-200K-GGUF capybara-tess-yi-34b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m capybara-tess-yi-34b-200k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Capybara-Tess-Yi-34B-200K-GGUF", model_file="capybara-tess-yi-34b-200k.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: brucethemoose's Capybara Tess Yi 34B 200K
**NousResearch/Nous-Capybara-34B** and **migtissera/Tess-M-Creative-v1.0**, merged with mergekit using the `ties` method and the following config:
```
models:
- model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.0
parameters:
density: 0.6
weight: 1.0
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
density: 0.6
weight: 1.0
merge_method: ties
base_model: //home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
normalize: true
int8_mask: true
dtype: float16
```
Both are 200K context models with Vicuna syntax, so:
# Prompt Format:
```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```
Stop token: `</s>`
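For illustration (not part of the original card's instructions), the prompt can be assembled in Python and the stop token passed explicitly, for example with ctransformers as shown earlier in this README; the system and user messages below are placeholders.

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Capybara-Tess-Yi-34B-200K-GGUF",
    model_file="capybara-tess-yi-34b-200k.Q4_K_M.gguf",
    model_type="yi",
    gpu_layers=50,
)

# Placeholder messages, formatted with the Orca-Vicuna template shown above.
system = "You are a helpful assistant."
user = "Summarise the benefits of long-context models in two sentences."
prompt = f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

# Pass the stop token so generation ends cleanly at </s>.
print(llm(prompt, stop=["</s>"], max_new_tokens=256))
```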
***
Credits:
https://github.com/cg123/mergekit
https://huggingface.co/NousResearch/Nous-Capybara-34B/discussions
https://huggingface.co/migtissera/Tess-M-Creative-v1.0
https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied
https://huggingface.co/01-ai/Yi-34B-200K
<!-- original-model-card end -->
|
openbmb/MiniCPM-2B-dpo-fp16 | openbmb | 2024-04-07T02:23:49Z | 667 | 32 | transformers | [
"transformers",
"pytorch",
"text-generation",
"MiniCPM",
"ModelBest",
"THUNLP",
"conversational",
"custom_code",
"en",
"zh",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-01-31T13:35:18Z | ---
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
---
<div align="center">
<h1>
MiniCPM
</h1>
</div>
<p align="center">
<a href="https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4" target="_blank">MiniCPM 技术报告</a><a href="https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4" target="_blank"> Technical Report</a> |
<a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM 多模态模型 Multi-modal Model</a> |
<a href="https://luca.cn/" target="_blank">CPM-C 千亿模型试用 ~100B Model Trial </a>
</p>
MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。
- 经过 SFT 后,MiniCPM 在公开综合性评测集上,MiniCPM 与 Mistral-7B相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。
- 经过 DPO 后,MiniCPM 在当前最接近用户体感的评测集 MTBench上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。
- 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。
- 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。
- 一张1080/2080可高效参数微调,一张3090/4090可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。
我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。
- 基于MiniCPM-2B的指令微调与人类偏好对**MiniCPM-2B-SFT/DPO。**
- 基于MiniCPM-2B的多模态模型**MiniCPM-V**,能力超越基于Phi-2的同参数级别多模态模型**。**
- MiniCPM-2B-SFT/DPO的Int4量化版**MiniCPM-2B-SFT/DPO-Int4。**
- 基于MLC-LLM、LLMFarm开发的MiniCPM手机端程序,**文本及多模态模型均可在手机端进行推理。**
MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings.
- After SFT, MiniCPM performs very close to Mistral-7B on open-source general benchmarks, with better ability in Chinese, mathematics and coding. Its overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc.
- After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench.
- MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodal models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to, or even better than, 9.6B Qwen-VL-Chat on some tasks.
- MiniCPM can be deployed and run inference on smartphones, with streaming output slightly faster than human speech. MiniCPM-V is the first multi-modal model that can be deployed on smartphones.
- Development on top of MiniCPM is low-cost: parameter-efficient fine-tuning can be done on a single 1080/2080 GPU, and full-parameter fine-tuning on a single 3090/4090 GPU.
We release all model parameters for research and limited commercial use. We also release all checkpoints from training and most of the public training data for research on model mechanisms.
- SFT and DPO version based on MiniCPM-2B and human preference: **MiniCPM-2B-SFT/DPO**
- The multi-modal model **MiniCPM-V**, based on MiniCPM-2B, which outperforms multimodal models of similar size, such as those built on Phi-2
- The INT4 quantized version **MiniCPM-2B-SFT/DPO-Int4** based on MiniCPM-2B-SFT/DPO
- Mobile phone application based on MLC-LLM and LLMFarm. Both the language model and the multimodal model can run inference on smartphones.
### 评测结果 Evaluation Results
详细的评测结果位于[github仓库](https://github.com/OpenBMB/MiniCPM?tab=readme-ov-file#%E8%AF%84%E6%B5%8B%E7%BB%93%E6%9E%9C)
Detailed evaluation results are in [github repo](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md#evaluation-results)
注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。
Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended.
We are investigating the cause now.
### 局限性 Limitations
- 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进;
- 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息;
- 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果;
- 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。
- Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model.
- To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models.
- Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts.
- Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability.
## 模型下载 Download
| HuggingFace | ModelScope | WiseModel |
|-------------|------------|-----------|
|[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16)
|[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
|[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16)
|[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16)
|[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
## 模型使用 Usage
* 安装`transformers>=4.36.0`以及`accelerate`后,运行以下代码
* 注意:需要在`from_pretrained`中明确指明模型的数据类型,否则会引起较大计算误差
* Run the following code after install `transformers>=4.36.0` and `accelerate`
* Warning: You must explicitly specify the model's data type in `from_pretrained`, otherwise large numerical errors will occur
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM-2B-dpo-fp16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(responds)
```
* 期望输出 Expected Output
```shell
山东省最高的山是泰山,海拔1545米。
相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
```
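Since vLLM generation is recommended in the evaluation note above, a minimal vLLM sketch follows. It passes the question as a plain completion prompt rather than through the `model.chat` helper used above, so treat it as illustrative only; the sampling values simply mirror the Huggingface example.

```python
from vllm import LLM, SamplingParams

# Load the model with vLLM; trust_remote_code is needed for MiniCPM's custom code.
llm = LLM(model="openbmb/MiniCPM-2B-dpo-fp16", trust_remote_code=True, dtype="float16")
params = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=256)

# Plain completion prompt; for chat-style use you may need to apply MiniCPM's chat template.
prompts = ["山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?"]
outputs = llm.generate(prompts, params)
print(outputs[0].outputs[0].text)
```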
## 开源协议 LICENSE
#### 模型协议 Model LICENSE
* 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) 协议开源
* MiniCPM 模型权重的使用则需要遵循 [“通用模型许可协议-来源说明-宣传限制-商业授权”](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md)。
* MiniCPM 模型权重对学术研究完全开放。
* 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。
* This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
* The usage of MiniCPM model weights must strictly follow [the General Model License (GML)](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md).
* The models and weights of MiniCPM are completely free for academic research.
* If you intend to utilize the model for commercial purposes, please reach out to [email protected] to obtain the certificate of authorization.
#### 声明 Statement
* 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。
* 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。
* 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
* As a language model, MiniCPM generates content by learning from a vast amount of text.
* However, it does not possess the ability to comprehend or express personal opinions or value judgments.
* Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
* Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
<p id="8"></p>
## 工作引用 Citation
* 如果觉得MiniCPM有助于您的工作,请考虑引用下列[技术报告](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4)
* Please cite our [techinical report](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4) if you find our work valuable.
```
@inproceedings{minicpm2024,
title={MiniCPM:Unveiling the Potential of End-side Large Language Models},
booktitle={OpenBMB Blog},
year={2024}
}
```
|
LoneStriker/TowerInstruct-13B-v0.1-GGUF | LoneStriker | 2024-02-11T10:34:49Z | 667 | 5 | null | [
"gguf",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"license:cc-by-nc-4.0",
"region:us"
] | translation | 2024-02-11T09:58:44Z | ---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerInstruct-13B-v0.1
## Model Details
### Model Description
TowerInstruct-13B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-13B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph/document-level translation, terminology-aware translation, context-aware translation), automatic post edition, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report. For now, you can check results obtained with the model [here](https://unbabel.com/announcing-tower-an-open-multilingual-llm-for-translation-related-tasks/).
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 13B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-13B-v0.1)
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation (sentence and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-13B-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
We are currently working on improving quality and consistency for document-level translation. This model is not intended to be used as a document-level translator.
## Bias, Risks, and Limitations
TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Prompt Format
TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```
### Supervised tasks
The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.
## Training Details
### Training Data
Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).
#### Training Hyperparameters
The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048
## Citation
To be completed.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
jordandavis/densepose | jordandavis | 2024-02-20T18:39:35Z | 667 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-18T02:25:10Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-jordandavis/densepose
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
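Until the official snippet is added, a typical ControlNet loading pattern with Diffusers would look roughly like the sketch below. It assumes the repository can be loaded directly as a `ControlNetModel`; the conditioning image must be a DensePose map prepared separately, and the file names are illustrative.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("jordandavis/densepose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# A DensePose conditioning image you have generated beforehand (illustrative path).
condition = load_image("./densepose_map.png")
image = pipe("a person dancing in a studio", image=condition, num_inference_steps=30).images[0]
image.save("output.png")
```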
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
NovusResearch/Novus-7b-tr_v1 | NovusResearch | 2024-03-28T10:06:33Z | 667 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:mlabonne/NeuralDaredevil-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T08:54:39Z | ---
language:
- en
license: cc-by-nc-4.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
base_model: mlabonne/NeuralDaredevil-7B
model-index:
- name: Novus-7b-tr_v1
results: []
---
# Novus-7b-tr_v1 |
allknowingroger/JupiterMerge-7B-slerp | allknowingroger | 2024-04-10T18:35:33Z | 667 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Jupiter-k-7B-slerp",
"Gille/StrangeMerges_21-7B-slerp",
"conversational",
"base_model:Kukedlc/Jupiter-k-7B-slerp",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T16:24:57Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Jupiter-k-7B-slerp
- Gille/StrangeMerges_21-7B-slerp
base_model:
- Kukedlc/Jupiter-k-7B-slerp
- Gille/StrangeMerges_21-7B-slerp
license: apache-2.0
---
# JupiterMerge-7B-slerp
JupiterMerge-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Jupiter-k-7B-slerp](https://huggingface.co/Kukedlc/Jupiter-k-7B-slerp)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/Jupiter-k-7B-slerp
layer_range: [0, 32]
- model: Gille/StrangeMerges_21-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/Jupiter-k-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/JupiterMerge-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
backyardai/Phi-3-mini-4k-instruct-GGUF | backyardai | 2024-05-22T22:26:49Z | 667 | 0 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | text-generation | 2024-05-02T05:07:16Z | ---
language:
- en
license: mit
tags:
- nlp
- code
base_model: microsoft/Phi-3-mini-4k-instruct
model_name: Phi-3-mini-4k-instruct-GGUF
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Phi 3 mini 4k instruct
- **Creator:** [microsoft](https://huggingface.co/microsoft/)
- **Original:** [Phi 3 mini 4k instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Date Created:** 2024-04-22
- **Trained Context:** 4096 tokens
- **Description:** State-of-the-art lightweight open model from Microsoft, trained with the Phi-3 datasets. These include both synthetic data and filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
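Outside of Backyard AI, a common way to load a GGUF file is through `llama-cpp-python`; the sketch below is illustrative, and the file name is a placeholder for whichever quantization you download:
```python
# Hedged sketch using llama-cpp-python; the GGUF file name is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-4k-instruct.Q4_K_M.gguf",  # placeholder: any quantization from this repo
    n_ctx=4096,        # trained context of this model
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)
out = llm("Explain in one sentence what quantization does to model weights.", max_tokens=64)
print(out["choices"][0]["text"])
```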
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
Mr-Bhaskar/fbt-gemma-7b-inst | Mr-Bhaskar | 2024-05-12T15:02:32Z | 667 | 0 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T07:28:37Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2 | ShenaoZhang | 2024-05-13T16:45:57Z | 667 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-13T15:57:10Z | ---
license: mit
base_model: ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_zephyr_5551_4iters_bs256_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_zephyr_5551_4iters_bs256_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_1](https://huggingface.co/ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
turnipseason/latext5 | turnipseason | 2024-06-04T02:58:53Z | 667 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"math",
"normalization",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-26T02:36:06Z | ---
license: mit
language:
- ru
library_name: transformers
pipeline_tag: text2text-generation
tags:
- math
- normalization
---
### Описание:
Модель для нормализации русскоязычных текстов, содержащих математические сущности, в формат LaTeX.
Модель является дообученной на переведённом&аугментированном датасете "[Mathematics Stack Exchange API Q&A Data](https://zenodo.org/records/1414384)" версией модели [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small).
### Description:
This is a model for mathematical text normalization in Russian, based on the [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small) paraphraser.
The model was created by finetuning the paraphraser on a translated&augmented "[Mathematics Stack Exchange API Q&A Data](https://zenodo.org/records/1414384)" dataset.
Пример использования:
---
Usage example:
---
``` python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from IPython.display import display, Math, Latex
model_dir = "turnipseason/latext5"
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
def get_latex(text):
inputs = tokenizer(text, return_tensors='pt').to(device)
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, num_return_sequences=1,
repetition_penalty=1.2,
max_length=len(text),
num_beams=10,
early_stopping=True
)
for h in hypotheses:
display(Latex(tokenizer.decode(h, skip_special_tokens=True)))
text = '''лямбда прописная квадрат минус три равно десять игрек куб
При этом шинус икс равен интеграл от экспоненты до трёх игрек штрих'''
get_latex(text)
``` |
pankajmathur/orca_mini_v5_8b_orpo | pankajmathur | 2024-05-31T16:12:34Z | 667 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-05-31T14:23:02Z | ---
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
---
**Model Name: llama_3_orca_mini_v5_8b_orpo**
# llama_3_orca_mini_v5_8b trained with various ORPO Datasets
<img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />
## NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further Full fine tuning, DPO, PPO or ORPO tuning and any kind of Merges.
I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model.
Dive in and innovate!
## Evaluation
coming soon...
<br>
## Example Usage
Here is the ChatML prompt format
```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```
Below shows a code example on how to use this model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v5_8b_orpo"
model = AutoModelForCausalLM.from_pretrained(model_slug)  # a causal-LM head is needed for generation
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
# with return_tensors="pt", apply_chat_template returns a tensor of input ids
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input, max_new_tokens=256)
```
This model is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
**Quants**
GGUF : Coming Soon
AWQ: Coming Soon
|
RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf | RichardErkhov | 2024-06-02T15:44:30Z | 667 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-02T06:28:33Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CleverMommy-mix-20b - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/CleverMommy-mix-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CleverMommy-mix-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q2_K.gguf) | Q2_K | 6.91GB |
| [CleverMommy-mix-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [CleverMommy-mix-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [CleverMommy-mix-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [CleverMommy-mix-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [CleverMommy-mix-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q3_K.gguf) | Q3_K | 9.04GB |
| [CleverMommy-mix-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [CleverMommy-mix-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [CleverMommy-mix-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [CleverMommy-mix-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q4_0.gguf) | Q4_0 | 10.52GB |
| [CleverMommy-mix-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [CleverMommy-mix-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [CleverMommy-mix-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q4_K.gguf) | Q4_K | 11.22GB |
| [CleverMommy-mix-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [CleverMommy-mix-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [CleverMommy-mix-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q5_0.gguf) | Q5_0 | 12.83GB |
| [CleverMommy-mix-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [CleverMommy-mix-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q5_K.gguf) | Q5_K | 13.18GB |
| [CleverMommy-mix-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [CleverMommy-mix-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q5_1.gguf) | Q5_1 | 13.98GB |
| [CleverMommy-mix-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q6_K.gguf) | Q6_K | 15.28GB |
| [CleverMommy-mix-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverMommy-mix-20b-gguf/blob/main/CleverMommy-mix-20b.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
---
An extended part of my effort to create Eileithyia-20B. This model is made by following the recipe below, inverting it, then SLERPing the models back together at 0.5, hopefully fusing the models into one block for use with Harmonia.
```yaml
slices:
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [0, 16]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [8, 24]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [17, 32]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
Thanks to Undi95 for pioneering the recipe.
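For illustration only, the final fusion step described above (SLERPing the forward and inverted stacks back together at 0.5) might be expressed in mergekit roughly as follows; the stack paths and layer count are assumptions, not published checkpoints:
```yaml
# Hypothetical follow-up step: SLERP the forward and inverted 20b stacks at t = 0.5.
# Paths below are placeholders; the actual intermediate stacks were not released.
slices:
  - sources:
      - model: ./clevermommy-stack-forward
        layer_range: [0, 62]
      - model: ./clevermommy-stack-inverted
        layer_range: [0, 62]
merge_method: slerp
base_model: ./clevermommy-stack-forward
parameters:
  t: 0.5
dtype: float16
```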
|
diwank/cryptgpt | diwank | 2024-06-05T17:45:26Z | 667 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"axolotl",
"generated_from_trainer",
"base_model:diwank/cryptgpt",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-02T18:11:45Z | ---
base_model: diwank/cryptgpt
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cryptgpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
# See:
# - https://github.com/karpathy/nanoGPT/blob/master/config/train_gpt2.py#L1
# - https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/tiny-llama/pretrain.yml#L14
# - https://github.com/karpathy/nanoGPT/blob/master/train.py#L35
base_model: diwank/cryptgpt
hub_model_id: diwank/cryptgpt
model_type: GPT2LMHeadModel
tokenizer_type: AutoTokenizer
trust_remote_code: true # required for CryptGPTTokenizer
resize_token_embeddings_to_32x: true
output_dir: ./outputs/model-out
datasets:
- path: diwank/encrypted-openwebtext
type: completion
dataset_prepared_path: ./cryptgpt-prepared-dataset
val_set_size: 0.04
shuffle_merged_datasets: false
sequence_len: 1024
pad_to_sequence_len: true
sample_packing: false
pretrain_multipack_attn: false
train_on_inputs: true
gradient_accumulation_steps: 1
micro_batch_size: 64
optimizer: adamw_bnb_8bit
adam_beta1: 0.9
adam_beta2: 0.95
seed: 42
lr_scheduler: cosine
learning_rate: 6e-4
cosine_min_lr_ratio: 0.1 # min: 6e-5
weight_decay: 0.1
bf16: auto
tf32: true
flash_attention: true
torch_compile: true
gradient_checkpointing: false
deepspeed: deepspeed_configs/zero2.json
max_steps: 1200000
eval_steps: 12000
save_steps: 12000
auto_resume_from_checkpoints: true
logging_steps: 1
eval_max_new_tokens: 128
eval_causal_lm_metrics:
- sacrebleu
wandb_project: cryptgpt-0.1
wandb_name: cryptgpt-run-07
```
</details><br>
# cryptgpt
This model is a fine-tuned version of [diwank/cryptgpt](https://huggingface.co/diwank/cryptgpt) on the diwank/encrypted-openwebtext dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 40912
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 10.9453 | 0.0000 | 1 | 10.9383 |
| 3.0117 | 0.2933 | 12000 | 2.8623 |
| 2.5234 | 0.5866 | 24000 | 2.4040 |
| 2.3398 | 0.8799 | 36000 | 2.2717 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tgama/wem_sentiment_model_ollama42 | tgama | 2024-06-20T16:55:56Z | 667 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-20T16:51:17Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** tgama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pysentimiento/robertuito-irony | pysentimiento | 2023-02-20T19:05:39Z | 666 | 2 | pysentimiento | [
"pysentimiento",
"pytorch",
"roberta",
"twitter",
"irony",
"es",
"arxiv:2106.09462",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- es
library_name: pysentimiento
tags:
- twitter
- irony
---
# Irony detection in Spanish
## robertuito-irony
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with the IroSvA 2019 dataset for irony detection. The base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained on Spanish tweets.
The positive class marks ironic text; the negative class marks non-ironic text.
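A short usage sketch with the `pysentimiento` toolkit follows; the `task="irony"` name is assumed to be exposed by `create_analyzer` in the same way as the other tasks:
```python
# Hedged sketch: irony detection through pysentimiento (task name assumed).
from pysentimiento import create_analyzer

analyzer = create_analyzer(task="irony", lang="es")
result = analyzer.predict("Qué emoción, otro lunes lleno de reuniones interminables")
print(result.output, result.probas)  # the predicted class and its probabilities
```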
## Results
Results for the four tasks evaluated in `pysentimiento`. Results are expressed as Macro F1 scores
| model | emotion | hate_speech | irony | sentiment |
|:--------------|:--------------|:--------------|:--------------|:--------------|
| robertuito | 0.560 ± 0.010 | 0.759 ± 0.007 | 0.739 ± 0.005 | 0.705 ± 0.003 |
| roberta | 0.527 ± 0.015 | 0.741 ± 0.012 | 0.721 ± 0.008 | 0.670 ± 0.006 |
| bertin | 0.524 ± 0.007 | 0.738 ± 0.007 | 0.713 ± 0.012 | 0.666 ± 0.005 |
| beto_uncased | 0.532 ± 0.012 | 0.727 ± 0.016 | 0.701 ± 0.007 | 0.651 ± 0.006 |
| beto_cased | 0.516 ± 0.012 | 0.724 ± 0.012 | 0.705 ± 0.009 | 0.662 ± 0.005 |
| mbert_uncased | 0.493 ± 0.010 | 0.718 ± 0.011 | 0.681 ± 0.010 | 0.617 ± 0.003 |
| biGRU | 0.264 ± 0.007 | 0.592 ± 0.018 | 0.631 ± 0.011 | 0.585 ± 0.011 |
Note that for Hate Speech, these are the results for Semeval 2019, Task 5 Subtask B (HS+TR+AG detection)
## Citation
If you use this model in your research, please cite pysentimiento and RoBERTuito papers:
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{perez-etal-2022-robertuito,
title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
author = "P{\'e}rez, Juan Manuel and
Furman, Dami{\'a}n Ariel and
Alonso Alemany, Laura and
Luque, Franco M.",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.785",
pages = "7235--7243",
abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.",
}
@inproceedings{ortega2019overview,
title={Overview of the task on irony detection in Spanish variants},
author={Ortega-Bueno, Reynier and Rangel, Francisco and Hern{\'a}ndez Far{\i}as, D and Rosso, Paolo and Montes-y-G{\'o}mez, Manuel and Medina Pagola, Jos{\'e} E},
booktitle={Proceedings of the Iberian languages evaluation forum (IberLEF 2019), co-located with 34th conference of the Spanish Society for natural language processing (SEPLN 2019). CEUR-WS. org},
volume={2421},
pages={229--256},
year={2019}
}
``` |
hazal/BioBERTurkcased-con-trM | hazal | 2022-08-10T11:11:27Z | 666 | 0 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-15T06:29:39Z | # BioBERTurk- Turkish Biomedical Language Models
---
language:
- tr
--- |
timm/maxvit_tiny_tf_384.in1k | timm | 2023-05-11T00:23:39Z | 666 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.01697",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-02T21:57:36Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for maxvit_tiny_tf_384.in1k
An official MaxViT image classification model. Trained in tensorflow on ImageNet-1k by paper authors.
Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 31.0
- GMACs: 17.5
- Activations (M): 123.4
- Image size: 384 x 384
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_tiny_tf_384.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_tiny_tf_384.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 192, 192])
# torch.Size([1, 64, 96, 96])
# torch.Size([1, 128, 48, 48])
# torch.Size([1, 256, 24, 24])
# torch.Size([1, 512, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_tiny_tf_384.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
keremberke/yolov5s-smoke | keremberke | 2023-01-04T22:14:32Z | 666 | 2 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/smoke-object-detection",
"model-index",
"region:us"
] | object-detection | 2023-01-04T22:13:56Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/smoke-object-detection
model-index:
- name: keremberke/yolov5s-smoke
results:
- task:
type: object-detection
dataset:
type: keremberke/smoke-object-detection
name: keremberke/smoke-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.9945003736307544 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-smoke" src="https://huggingface.co/keremberke/yolov5s-smoke/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-smoke')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-smoke --epochs 10
```
|
Svngoku/distilbert-base-uncased-finetuned-emotion-part-2 | Svngoku | 2023-07-27T15:23:06Z | 666 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:emotion",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-25T09:37:59Z | ---
license: mit
datasets:
- emotion
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
--- |
TheBloke/guanaco-65B-GGUF | TheBloke | 2023-09-27T12:52:48Z | 666 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:timdettmers/guanaco-65b",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-09-19T23:40:31Z | ---
license: other
model_name: Guanaco 65B
base_model: timdettmers/guanaco-65b
inference: false
model_creator: Tim Dettmers
model_type: llama
prompt_template: '### Human: {prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Guanaco 65B - GGUF
- Model creator: [Tim Dettmers](https://huggingface.co/timdettmers)
- Original model: [Guanaco 65B](https://huggingface.co/timdettmers/guanaco-65b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Tim Dettmers' Guanaco 65B](https://huggingface.co/timdettmers/guanaco-65b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/guanaco-65B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-65B-GGUF)
* [Tim Dettmers's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/guanaco-65B-HF)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Guanaco
```
### Human: {prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
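As a rough sanity check on the Q4_K figure, and assuming each super-block additionally stores one fp16 scale and one fp16 min on top of the per-block 6-bit scales and mins:

$$\frac{256 \times 4 \;+\; 8 \times (6+6) \;+\; 2 \times 16}{256} \;=\; \frac{1152}{256} \;=\; 4.5 \text{ bpw}$$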
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Guanaco-65B.Q2_K.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q2_K.gguf) | Q2_K | 2 | 27.04 GB| 29.54 GB | smallest, significant quality loss - not recommended for most purposes |
| [Guanaco-65B.Q3_K_S.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q3_K_S.gguf) | Q3_K_S | 3 | 28.16 GB| 30.66 GB | very small, high quality loss |
| [Guanaco-65B.Q3_K_M.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q3_K_M.gguf) | Q3_K_M | 3 | 31.56 GB| 34.06 GB | very small, high quality loss |
| [Guanaco-65B.Q3_K_L.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q3_K_L.gguf) | Q3_K_L | 3 | 34.65 GB| 37.15 GB | small, substantial quality loss |
| [Guanaco-65B.Q4_0.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q4_0.gguf) | Q4_0 | 4 | 36.80 GB| 39.30 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Guanaco-65B.Q4_K_S.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q4_K_S.gguf) | Q4_K_S | 4 | 36.92 GB| 39.42 GB | small, greater quality loss |
| [Guanaco-65B.Q4_K_M.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q4_K_M.gguf) | Q4_K_M | 4 | 39.35 GB| 41.85 GB | medium, balanced quality - recommended |
| [Guanaco-65B.Q5_0.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q5_0.gguf) | Q5_0 | 5 | 44.92 GB| 47.42 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Guanaco-65B.Q5_K_S.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q5_K_S.gguf) | Q5_K_S | 5 | 44.92 GB| 47.42 GB | large, low quality loss - recommended |
| [Guanaco-65B.Q5_K_M.gguf](https://huggingface.co/TheBloke/guanaco-65B-GGUF/blob/main/Guanaco-65B.Q5_K_M.gguf) | Q5_K_M | 5 | 46.24 GB| 48.74 GB | large, very low quality loss - recommended |
| Guanaco-65B.Q6_K.gguf | Q6_K | 6 | 53.56 GB| 56.06 GB | very large, extremely low quality loss |
| Guanaco-65B.Q8_0.gguf | Q8_0 | 8 | 69.37 GB| 71.87 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `Guanaco-65B.Q6_K.gguf-split-a`
* `Guanaco-65B.Q6_K.gguf-split-b`
### q8_0
Please download:
* `Guanaco-65B.Q8_0.gguf-split-a`
* `Guanaco-65B.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat Guanaco-65B.Q6_K.gguf-split-* > Guanaco-65B.Q6_K.gguf && rm Guanaco-65B.Q6_K.gguf-split-*
cat Guanaco-65B.Q8_0.gguf-split-* > Guanaco-65B.Q8_0.gguf && rm Guanaco-65B.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B Guanaco-65B.Q6_K.gguf-split-a + Guanaco-65B.Q6_K.gguf-split-b Guanaco-65B.Q6_K.gguf
del Guanaco-65B.Q6_K.gguf-split-a Guanaco-65B.Q6_K.gguf-split-b
COPY /B Guanaco-65B.Q8_0.gguf-split-a + Guanaco-65B.Q8_0.gguf-split-b Guanaco-65B.Q8_0.gguf
del Guanaco-65B.Q8_0.gguf-split-a Guanaco-65B.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/guanaco-65B-GGUF and below it, a specific filename to download, such as: Guanaco-65B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/guanaco-65B-GGUF Guanaco-65B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/guanaco-65B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/guanaco-65B-GGUF Guanaco-65B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m Guanaco-65B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/guanaco-65B-GGUF", model_file="Guanaco-65B.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
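Before diving into those guides, a minimal llama-cpp-python + LangChain sketch might look like the following. The import path reflects recent `langchain-community` releases, and the model path assumes the Q4_K_M file downloaded above; treat this as an untested sketch rather than a canonical recipe:
```python
from langchain_community.llms import LlamaCpp

# Point model_path at the GGUF file downloaded earlier in this README.
llm = LlamaCpp(
    model_path="Guanaco-65B.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=2048,
    temperature=0.7,
)

# Use the Guanaco prompt template shown above.
print(llm.invoke("### Human: Summarise what GGUF is in one sentence.\n### Assistant:"))
```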
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Tim Dettmers' Guanaco 65B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Tim Dettmers' Guanaco 65B fp16 HF
These files are fp16 HF model files for [Tim Dettmers' Guanaco 65B](https://huggingface.co/timdettmers/guanaco-65b).
It is the result of merging the LoRA into the base model and then saving in HF fp16 format.
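The exact merge script is not included here, but a LoRA merge of this kind is typically done with PEFT roughly as sketched below. The base-model repo id, dtype and output path are illustrative assumptions, not the precise commands used to produce this repo:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "huggyllama/llama-65b"         # assumption: any LLaMA-65B checkpoint in HF format
adapter_id = "timdettmers/guanaco-65b"   # the original Guanaco LoRA adapter

# Load the base model, fold the LoRA weights into it, then save a plain fp16 checkpoint.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

merged.save_pretrained("guanaco-65B-HF")
AutoTokenizer.from_pretrained(base_id).save_pretrained("guanaco-65B-HF")
```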
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-65B-GGML)
* [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/guanaco-65B-HF)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card
Not provided by original model creator.
<!-- original-model-card end -->
|
OpenLLM-Ro/RoLlama2-7b-Instruct | OpenLLM-Ro | 2024-06-27T21:15:19Z | 666 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ro",
"arxiv:2406.18266",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T17:18:54Z | ---
license: cc-by-nc-4.0
language:
- ro
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
RoLlama2 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 7B model**. Links to other models can be found at the bottom of this page.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly releases a collection of Romanian LLMs, both as foundation models and as instruct and chat variants.
- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [RoLlama2-7b-Base](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
- **Paper:** https://arxiv.org/abs/2406.18266
## Intended Use
### Intended Use Cases
RoLlama2 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Use in any manner that violates the license or any applicable laws or regulations, or use in languages other than Romanian.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Instruct")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Instruct")
instruction = "Care este cel mai înalt vârf muntos din România?"
chat = [
{"role": "system", "content": "Ești un asistent folositor, respectuos și onest. Încearcă să ajuți cât mai mult prin informațiile oferite, excluzând răspunsuri toxice, rasiste, sexiste, periculoase și ilegale."},
{"role": "user", "content": instruction},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
## Benchmarks
| Model | Average | ARC | MMLU |Winogrande|HellaSwag | GSM8k |TruthfulQA|
|--------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| Llama-2-7b-chat | 38.03 | 37.95 | 27.22 | 59.29 | 57.22 | 2.53 | 44.00 |
|*RoLlama2-7b-Instruct*|***45.71***|***43.66***|***39.70***|***70.34*** | *57.36* |***18.78***| *44.44* |
|RoLlama2-7b-Chat | 43.82 | 41.92 | 37.29 | 66.68 | **57.91**| 13.47 | **45.65**|
## MT-Bench
| Model | Average | 1st turn | 2nd turn | Answers in Ro |
|--------------------|:--------:|:--------:|:--------:|:--------:|
| Llama-2-7b-chat | 1.21 | 1.68 | 0.74 | 44 / 160 |
|*RoLlama2-7b-Instruct*| ***3.70***|***4.74***| ***2.66*** | ***160 / 160*** |
|RoLlama2-7b-Chat | TBC | TBC | TBC | TBC |
## RoCulturaBench
| Model | Score | Answers in Ro|
|--------------------|:--------:|:--------:|
| Llama-2-7b-chat | 1.72 | 48 / 100 |
|*RoLlama2-7b-Instruct*| ***3.43***| ***160 / 160*** |
|RoLlama2-7b-Chat | TBC | TBC |
## RoLlama2 Model Family
| Model | Link |
|--------------------|:--------:|
|RoLlama2-7b-Base | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base) |
|*RoLlama2-7b-Instruct*| [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct) |
|RoLlama2-7b-Chat | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat) |
## Citation
```
@misc{masala2024vorbecstiromanecsterecipetrain,
title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions},
author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
year={2024},
eprint={2406.18266},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.18266},
}
```
<!-- **APA:**
[More Information Needed] -->
|
kuleshov-group/caduceus-ps_seqlen-131k_d_model-256_n_layer-16 | kuleshov-group | 2024-06-11T02:22:50Z | 666 | 10 | transformers | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:2403.03234",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2024-02-29T22:41:31Z | ---
library_name: transformers
license: apache-2.0
---
## Using Caduceus
To use the pre-trained model for masked language modeling, use the following snippet:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
# See the `Caduceus` collection page on the hub for list of available models.
model_name = "kuleshov-group/caduceus-ps_seqlen-131k_d_model-256_n_layer-16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```
Alternatively, you can instantiate a model from scratch to train on your own data as follows:
```python
from transformers import AutoConfig, AutoModelForMaskedLM
# Add any config overrides here, see the `config.json` file on the hub for details.
config_overrides = {}
# See the `Caduceus` collection page on the hub for list of available models.
config = AutoConfig.from_pretrained(
"kuleshov-group/caduceus-ps_seqlen-131k_d_model-256_n_layer-16",
**config_overrides,
)
model = AutoModelForMaskedLM.from_config(config)
```
## Model Details
This is the Caduceus-PS model with hidden dimension 256 and 16 MambaDNA layers.
This model is reverse complement (RC) equivariant and thus no RC data augmentation is required when training this model, either during pre-training or for downstream fine-tuning.
Note that the model hidden state will be **twice** the size of that of a non-RC-equivariant counterpart.
For downstream task training and inference, and to ensure RC **invariant** outputs downstream, one can either run the downstream model on both the hidden state and its RC, or average the hidden state with its RC before passing it to the downstream model.
To RC the hidden states, one can use `hidden_states.flip(dims=(-2, -1))`, which flips along the sequence length and channel dimensions.
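For example, the averaging option could be sketched as follows. This helper is hypothetical; it assumes `hidden_states` has shape `(batch, seq_len, 2 * d_model)` as returned by this RC-equivariant model, and the final mean-pool over length is just one possible way to produce a fixed-size input for a downstream head:
```python
import torch

def rc_invariant_features(hidden_states: torch.Tensor) -> torch.Tensor:
    """Average the hidden states with their reverse complement so that a
    sequence and its RC yield identical downstream inputs."""
    rc_hidden = hidden_states.flip(dims=(-2, -1))  # flip sequence-length and channel dims
    invariant = (hidden_states + rc_hidden) / 2
    return invariant.mean(dim=1)                   # optional: mean-pool over sequence length

# downstream_logits = my_classifier(rc_invariant_features(hidden_states))
```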
This model was pre-trained on the human reference genome with sequence length 131,072 for 50k steps (each step contained ~1M base pairs / tokens).
For more details, please see our paper: [Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling](https://arxiv.org/abs/2403.03234).
## Citation
Please cite our work using the bibtex below:
**BibTeX:**
```
@article{schiff2024caduceus,
title={Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling},
author={Schiff, Yair and Kao, Chia-Hsiang and Gokaslan, Aaron and Dao, Tri and Gu, Albert and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2403.03234},
year={2024}
}
```
## Model Card Contact
Yair Schiff ([email protected]) |
allknowingroger/TaoPassthrough-15B-s | allknowingroger | 2024-04-10T19:02:28Z | 666 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"zhengr/MixTAO-7Bx2-MoE-v8.1",
"base_model:zhengr/MixTAO-7Bx2-MoE-v8.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T06:57:19Z | ---
tags:
- merge
- mergekit
- lazymergekit
- zhengr/MixTAO-7Bx2-MoE-v8.1
base_model:
- zhengr/MixTAO-7Bx2-MoE-v8.1
- zhengr/MixTAO-7Bx2-MoE-v8.1
- zhengr/MixTAO-7Bx2-MoE-v8.1
- zhengr/MixTAO-7Bx2-MoE-v8.1
- zhengr/MixTAO-7Bx2-MoE-v8.1
license: apache-2.0
---
# TaoPassthrough-15B-s
TaoPassthrough-15B-s is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [0,9]
- sources:
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [5,14]
- sources:
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [10,19]
- sources:
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [15,24]
- sources:
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [20,32]
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/TaoPassthrough-15B-s"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
NeverSleep/NoromaidxOpenGPT4-2-GGUF-iMatrix | NeverSleep | 2024-03-26T15:40:40Z | 666 | 18 | null | [
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"nsfw",
"arxiv:2306.01708",
"base_model:NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3",
"base_model:rombodawg/Open_Gpt4_8x7B_v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-26T14:54:33Z | ---
base_model:
- NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
- rombodawg/Open_Gpt4_8x7B_v0.2
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
license: cc-by-nc-4.0
---
<!-- description start -->
## Description
This repo contains fp16 files of NoromaidxOpenGPT4-2.
The model was created by merging Noromaid-8x7b-Instruct with Open_Gpt4_8x7B_v0.2 in exactly the same way [Rombodawg](https://huggingface.co/rombodawg) did his merge.
The only difference between [NoromaidxOpenGPT4-1](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-1-GGUF-iMatrix/) and [NoromaidxOpenGPT4-2](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2-GGUF-iMatrix/) is that the first iteration uses Mixtral-8x7B as the base for the merge (f16), while the second uses Open_Gpt4_8x7B_v0.2 as the base (bf16).
After further testing and usage, both models were released, because they each have their own qualities.
You can download the imatrix file to make many other quants [HERE](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2/blob/main/imatrix-2.dat).
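If you do, the imatrix file is passed to llama.cpp's `quantize` tool, e.g. something along the lines of `./quantize --imatrix imatrix-2.dat <model-fp16>.gguf <model-IQ4_XS>.gguf IQ4_XS`; the source and output filenames here are placeholders, and the exact binary name and flags may vary with your llama.cpp version.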
<!-- description end -->
<!-- prompt-template start -->
### Prompt template:
## Alpaca
```
### Instruction:
{system prompt}
### Input:
{prompt}
### Response:
{output}
```
## Mistral
```
[INST] {prompt} [/INST]
```
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [rombodawg/Open_Gpt4_8x7B_v0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
parameters:
density: .5
weight: 1
- model: NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
parameters:
density: .5
weight: .7
merge_method: ties
base_model: rombodawg/Open_Gpt4_8x7B_v0.2
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
### Support
If you want to support us, you can [here](https://ko-fi.com/undiai). |
nbeerbower/slerp-bophades-truthy-math-mistral-7B | nbeerbower | 2024-04-06T13:03:05Z | 666 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:nbeerbower/bophades-mistral-math-DPO-7B",
"base_model:nbeerbower/bophades-mistral-truthy-DPO-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T03:32:08Z | ---
license: apache-2.0
base_model:
- nbeerbower/bophades-mistral-math-DPO-7B
- nbeerbower/bophades-mistral-truthy-DPO-7B
library_name: transformers
tags:
- mergekit
- merge
---

# slerp-bophades-truthy-math-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bophades-mistral-math-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-math-DPO-7B)
* [nbeerbower/bophades-mistral-truthy-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bophades-mistral-truthy-DPO-7B
layer_range: [0, 32]
- model: nbeerbower/bophades-mistral-math-DPO-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
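This card ships no usage snippet; loading the merged checkpoint should work like any other Mistral-7B fine-tune. Below is a rough, untested sketch, with dtype, device map and prompt chosen arbitrarily:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/slerp-bophades-truthy-math-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Simple completion-style prompt; adjust to your preferred chat format.
prompt = "Question: Is the statement 'every prime number is odd' true?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```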
|
wandb/Mistral-7B-v0.2 | wandb | 2024-04-11T16:04:30Z | 666 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-11T15:57:00Z | ---
library_name: transformers
tags: []
---
This is just a conversion of the Mistral-7B-v0.2 model shared by the Mistral team during the SF hackathon.
The script used was from [transformers](https://github.com/huggingface/transformers/blob/6cdbd73e01a9719bfaec07d91fd108e8d932bbbb/src/transformers/models/mistral/convert_mistral_weights_to_hf.py#L4)
|
baconnier/banker | baconnier | 2024-05-08T13:58:41Z | 666 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-18T16:44:22Z | ---
language:
- en
license: mit
library_name: transformers
---
Deprecated. See [baconnier/finance_dolphin_orpo_llama3_8B_r64_51K](https://huggingface.co/baconnier/finance_dolphin_orpo_llama3_8B_r64_51K) instead.
|
Vipinap/RAFT-llama3-8b-v1 | Vipinap | 2024-05-14T11:40:33Z | 666 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:raft_llama3_8b_ft_stable",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T11:25:02Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: raft_llama3_8b_ft_stable
---
# Uploaded model
- **Developed by:** Vipinap
- **License:** apache-2.0
- **Finetuned from model :** raft_llama3_8b_ft_stable
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf | RichardErkhov | 2024-06-04T02:30:40Z | 666 | 0 | null | [
"gguf",
"arxiv:2405.04324",
"region:us"
] | null | 2024-06-04T01:24:21Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
granite-20b-code-instruct - GGUF
- Model creator: https://huggingface.co/ibm-granite/
- Original model: https://huggingface.co/ibm-granite/granite-20b-code-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [granite-20b-code-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q2_K.gguf) | Q2_K | 7.38GB |
| [granite-20b-code-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.IQ3_XS.gguf) | IQ3_XS | 8.06GB |
| [granite-20b-code-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.IQ3_S.gguf) | IQ3_S | 0.79GB |
| [granite-20b-code-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q3_K_S.gguf) | Q3_K_S | 0.56GB |
| [granite-20b-code-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.IQ3_M.gguf) | IQ3_M | 0.06GB |
| [granite-20b-code-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q3_K.gguf) | Q3_K | 0.04GB |
| [granite-20b-code-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q3_K_M.gguf) | Q3_K_M | 0.0GB |
| [granite-20b-code-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q3_K_L.gguf) | Q3_K_L | 0.0GB |
| [granite-20b-code-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.IQ4_XS.gguf) | IQ4_XS | 0.0GB |
| [granite-20b-code-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q4_0.gguf) | Q4_0 | 0.0GB |
| [granite-20b-code-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.IQ4_NL.gguf) | IQ4_NL | 0.0GB |
| [granite-20b-code-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q4_K_S.gguf) | Q4_K_S | 0.0GB |
| [granite-20b-code-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q4_K.gguf) | Q4_K | 0.0GB |
| [granite-20b-code-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q4_K_M.gguf) | Q4_K_M | 0.0GB |
| [granite-20b-code-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q4_1.gguf) | Q4_1 | 0.0GB |
| [granite-20b-code-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q5_0.gguf) | Q5_0 | 0.0GB |
| [granite-20b-code-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q5_K_S.gguf) | Q5_K_S | 0.0GB |
| [granite-20b-code-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q5_K.gguf) | Q5_K | 0.0GB |
| [granite-20b-code-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q5_K_M.gguf) | Q5_K_M | 0.0GB |
| [granite-20b-code-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q5_1.gguf) | Q5_1 | 0.0GB |
| [granite-20b-code-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q6_K.gguf) | Q6_K | 0.0GB |
| [granite-20b-code-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-gguf/blob/main/granite-20b-code-instruct.Q8_0.gguf) | Q8_0 | 0.0GB |
Original model description:
---
pipeline_tag: text-generation
base_model: ibm-granite/granite-20b-code-base
inference: true
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-instruct
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 60.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 53.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 58.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 42.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 45.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 44.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 49.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 42.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 43.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 43.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 45.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 29.9
veriefied: false
---

# Granite-20B-Code-Instruct
## Model Summary
**Granite-20B-Code-Instruct** is a 20B parameter model fine tuned from *Granite-20B-Code-Base* on a combination of **permissively licensed** instruction data to enhance instruction following capabilities including logical reasoning and problem-solving skills.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date**: May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
### Intended use
The model is designed to respond to coding-related instructions and can be used to build coding assistants.
<!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 -->
### Generation
This is a simple example of how to use **Granite-20B-Code-Instruct** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-20b-code-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Write a code to find the maximum value in a list of numbers." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
<!-- TO DO: Check this part -->
## Training Data
Granite Code Instruct models are trained on the following types of data.
* Code Commits Datasets: we sourced code commits data from the [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft) dataset, a filtered version of the full CommitPack dataset. From the CommitPackFT dataset, we only consider data for 92 programming languages. Our inclusion criteria boil down to selecting programming languages common across CommitPackFT and the 116 languages that we considered to pretrain the code-base model (*Granite-20B-Code-Base*).
* Math Datasets: We consider two high-quality math datasets, [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) and [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA). Due to license issues, we filtered out GSM8K-RFT and Camel-Math from MathInstruct dataset.
* Code Instruction Datasets: We use [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction) and a small collection of synthetic API calling datasets.
* Language Instruction Datasets: We include high-quality datasets such as [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) and an open license-filtered version of [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). We also include a collection of hardcoded prompts to ensure our model generates correct outputs given inquiries about its name or developers.
## Infrastructure
We train the Granite Code models using two of IBM's super computing clusters, namely Vela and Blue Vela, both outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
Granite Code instruct models are primarily finetuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited on out-of-domain programming languages. In this situation, it is beneficial to provide few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-20B-Code-Base](https://huggingface.co/ibm-granite/granite-20b-code-base)* model card.
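For instance, few-shot steering can reuse the chat template from the Generation example above; the sketch below is illustrative only, and the COBOL turns are invented:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-20b-code-instruct")

# Prepend worked user/assistant turns for the out-of-domain language before the real request.
few_shot_chat = [
    {"role": "user", "content": "Write a COBOL paragraph that adds two numbers."},
    {"role": "assistant", "content": "ADD-NUMBERS.\n    ADD NUM-A TO NUM-B GIVING NUM-SUM."},
    {"role": "user", "content": "Write a COBOL paragraph that multiplies two numbers."},
]
prompt = tokenizer.apply_chat_template(few_shot_chat, tokenize=False, add_generation_prompt=True)
# `prompt` can then be tokenized and passed to model.generate() exactly as in the Generation example.
```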
|
netcat420/MFANN3bv0.14-GGUF | netcat420 | 2024-06-29T04:46:08Z | 666 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2024-06-29T04:26:30Z | ---
license: mit
---
|
stablediffusionapi/dark-sushi-mix | stablediffusionapi | 2023-04-24T19:57:54Z | 665 | 21 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-24T19:53:32Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Dark Sushi Mix API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "dark-sushi-mix"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/dark-sushi-mix)
Credits: [View credits](https://civitai.com/?query=Dark%20Sushi%20Mix)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "dark-sushi-mix",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
sail-rvc/EricCartman | sail-rvc | 2023-07-14T07:22:16Z | 665 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | 2023-07-14T07:21:57Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# EricCartman
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:22:16
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
TheBloke/Huginn-13B-v4.5-GGUF | TheBloke | 2023-09-27T12:46:43Z | 665 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:The-Face-Of-Goonery/Huginn-13b-v4.5",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-08-30T10:34:52Z | ---
license: llama2
model_name: Huginn 13B v4.5
base_model: The-Face-Of-Goonery/Huginn-13b-v4.5
inference: false
model_creator: Caleb Morgan
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Huginn 13B v4.5 - GGUF
- Model creator: [Caleb Morgan](https://huggingface.co/The-Face-Of-Goonery)
- Original model: [Huginn 13B v4.5](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-v4.5)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Caleb Morgan's Huginn 13B v4.5](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-v4.5).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Huginn-13B-v4.5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF)
* [Caleb Morgan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-v4.5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [huginn-13b-v4.5.Q2_K.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [huginn-13b-v4.5.Q3_K_S.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [huginn-13b-v4.5.Q3_K_M.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [huginn-13b-v4.5.Q3_K_L.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [huginn-13b-v4.5.Q4_0.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [huginn-13b-v4.5.Q4_K_S.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [huginn-13b-v4.5.Q4_K_M.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [huginn-13b-v4.5.Q5_0.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [huginn-13b-v4.5.Q5_K_S.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [huginn-13b-v4.5.Q5_K_M.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [huginn-13b-v4.5.Q6_K.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [huginn-13b-v4.5.Q8_0.gguf](https://huggingface.co/TheBloke/Huginn-13B-v4.5-GGUF/blob/main/huginn-13b-v4.5.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Huginn-13B-v4.5-GGUF and below it, a specific filename to download, such as: huginn-13b-v4.5.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install "huggingface-hub>=0.17.1"
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Huginn-13B-v4.5-GGUF huginn-13b-v4.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Huginn-13B-v4.5-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Huginn-13B-v4.5-GGUF huginn-13b-v4.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m huginn-13b-v4.5.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Huginn-13B-v4.5-GGUF", model_file="huginn-13b-v4.5.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
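For completeness, here is a rough equivalent using llama-cpp-python. This is a minimal sketch, not part of the original README; the file name, prompt and layer count are placeholders you should adjust to the quant you downloaded and your hardware.
```python
from llama_cpp import Llama

# Load a downloaded GGUF quant; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="huginn-13b-v4.5.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about ravens.\n\n### Response:"
)
output = llm(prompt, max_tokens=200, temperature=0.7)
print(output["choices"][0]["text"])
```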
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
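As a quick illustration (a minimal sketch under the same assumptions as above, not taken from either guide), a downloaded GGUF file from this repo can be wired into LangChain through the llama-cpp-python integration:
```python
# Depending on your LangChain version the import may be `from langchain.llms import LlamaCpp`.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="huginn-13b-v4.5.Q4_K_M.gguf",  # local path to a downloaded quant
    n_ctx=4096,
    n_gpu_layers=32,  # 0 if no GPU acceleration
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize what a GGUF file is.\n\n### Response:"
)
print(llm.invoke(prompt))
```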
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Caleb Morgan's Huginn 13B v4.5
well there goes several hours of my life
2 models in one day lol....
https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b merged with https://huggingface.co/Sao10K/Mythical-Destroyer-V2-L2-13B
alpaca format.
<!-- original-model-card end -->
|
Yntec/HassanRemix | Yntec | 2023-09-06T00:45:46Z | 665 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Hassan",
"license:creativeml-openrail-m",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-05T12:44:41Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Hassan
inference: false
---
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW content.
# Hassan Remix
This is me giving up on making a SFW version of this model. It has the MoistMixVAE baked in. When you remove SD1.5 from HassanBlend1.2, you end up with a fully functional model, except that it generates NSFW content unprompted, requires negative prompts to stop that, and is unsuitable for the inference API. In an attempt to solve this I produced several versions, until I thought I was done with HassanRemix. It was as close as I could get, but the model still generated nudity at random. I even tried adding a LoRA that adds clothes; girls would appear fully clothed, but with the bottom of their skirts shown and nothing covering them. Rather than have it generate black boxes, the inference API remains turned off. This was the only sample image I made that I can show.
Preview and Prompt:

thomas kinkade. Pretty CUTE girl sitting, CHIBI EYES, gorgeous detailed hair, Magazine ad, iconic, 1949, A beautiful rainbow gradient bloom flying of a landscape. sharp focus. acrylic art on canvas by ROSSDRAWS and Clay Mann. t is a benji davies and colorful view of an idyllic, dreamlike world with charles sillem lidderdale, DETAILED
# Hassan NSFW
HassanNSFW and HassanEssence were intermediate models that also looked good but were rejected because subjects were generated naked.
# Hassan Tease
HassanTease and HassanAlpha were models with this LoRA added: https://civitai.com/models/88132/leosams-clothing-adjuster-lora - merged at -1.0 strength, but not enough to stop the nudity.
# Humu NSFW
An attempt to mix https://civitai.com/models/136799 in to solve the problem, which also failed. This is a temporary location for this model before it moves to the upcoming Hassanhumu model.
# Recipes
-Add Difference 1.0-
Primary model:
HassanBlend1.2
Secondary model:
HassanBlend1.2
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
HassanEssense
-Weighted Sum 0.50-
Primary model:
HassanBlend1.2
Secondary model:
HassanEssense
Output Model:
HassanNSFW
-Weighted Sum 0.50-
Primary model:
humu
Secondary model:
HassanEssense
Output Model:
humuNSFW
-Merge LoRA -1.0 -
Primary model:
HassanEssense
Merge LoRA to checkpoint:
leosams-clothing-adjuster-lora
Output Model:
HassanAlpha
-Weighted Sum 0.85-
Primary model:
HassanBlend1.2
Secondary model:
HassanAlpha
Output Model:
HassanTease
-Weighted Sum Train Difference MBW WRAP16 -
Primary model:
HassanBlend1.2
Secondary model:
HassanEssense
Output Model:
HassanRemix
-Bake MoistMixV2 VAE-
Output Model:
HassanRemixVAE |
ricochet/lora-sdxl-sheppy-exel | ricochet | 2024-01-08T14:04:54Z | 665 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:ricochet/shep-exel",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-01-08T12:53:53Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: sshheeppppyy
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- ricochet/shep-exel
---
# LoRA DreamBooth - ricochet/lora-sdxl-sheppy-exel
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
sshheeppppyy
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark package:
```
pip install invisible_watermark transformers accelerate safetensors
```
To load the base model together with these LoRA weights, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'ricochet/lora-sdxl-sheppy-exel',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic sshheeppppyy jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
chs20/fare4-clip | chs20 | 2024-06-16T06:24:07Z | 665 | 1 | open_clip | [
"open_clip",
"zero-shot-image-classification",
"arxiv:2402.12336",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2024-02-23T16:22:45Z | ---
license: mit
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
[[Paper]](https://arxiv.org/abs/2402.12336) [[GitHub]](https://github.com/chs20/RobustVLM)
FARE CLIP ViT-L/14 model.
Unsupervised adversarial fine-tuning from an OpenAI CLIP initialization on ImageNet, using an infinity-norm threat model with radius 4/255.
## Usage
```python
import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/fare4-clip')
```
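As a sketch of downstream use (not from the original card; the image path and class labels below are placeholders), the loaded model can be applied to zero-shot classification in the usual open_clip way:
```python
import torch
from PIL import Image
import open_clip

tokenizer = open_clip.get_tokenizer('hf-hub:chs20/fare4-clip')

# Preprocess one image and a set of candidate captions.
image = image_processor(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a dog", "a photo of a cat", "a photo of a car"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # class probabilities for the image
```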
## Citation
If you find this model useful, please consider citing our paper:
```bibtex
@article{schlarmann2024robustclip,
title={Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models},
author={Christian Schlarmann and Naman Deep Singh and Francesco Croce and Matthias Hein},
year={2024},
journal={ICML}
}
``` |
meta-llama/CodeLlama-34b-hf | meta-llama | 2024-03-14T18:41:08Z | 665 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T00:24:52Z | ---
extra_gated_heading: You need to share contact information with Meta to access this model
extra_gated_prompt: >-
### LLAMA 2 COMMUNITY LICENSE AGREEMENT
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at
https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or
element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to
use, reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this
Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama
Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee's affiliates, is greater than 700 million monthly
active users in the preceding calendar month, you must request a license from
Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to exercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing
the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
the Llama Materials or Llama 2 outputs or results, or any portion of any of
the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under
this Agreement shall terminate as of the date such litigation or claim is
filed or instituted. You will indemnify and hold harmless Meta from and
against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California
shall have exclusive jurisdiction of any dispute arising out of this
Agreement.
USE POLICY
### Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 2. If you access or use Llama 2, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at
[ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
#### Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not
use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model:
[github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model:
[developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns:
[facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of
Llama: [[email protected]](mailto:[email protected])
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- code
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 34B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) |
| 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) |
| 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) |
| 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers:
```bash
pip install transformers accelerate
```
Model capabilities:
- [x] Code completion.
- [ ] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "meta-llama/CodeLlama-34b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in four model sizes and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B, 34B, and 70B parameters.
**This repository contains the base version of the 34B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide). |
UnfilteredAI/NSFW-gen | UnfilteredAI | 2024-04-16T15:56:01Z | 665 | 38 | diffusers | [
"diffusers",
"pytorch",
"onnx",
"safetensors",
"NSFW",
"UnfilteredAI",
"not-for-all-audiences",
"text-to-image",
"en",
"base_model:OEvortex/PixelGen",
"doi:10.57967/hf/2028",
"license:other",
"region:us"
] | text-to-image | 2024-04-01T10:04:12Z | ---
base_model: OEvortex/PixelGen
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- NSFW
- UnfilteredAI
- not-for-all-audiences
---
**Updated and working model [here](https://huggingface.co/UnfilteredAI/NSFW-gen-v2)**
**Model Name:** NSFW-gen
**Type:** Text-to-Image Generator
**Description:** NSFW-gen is a text-to-image generator developed by UnfilteredAI. This model is designed to generate all kinds of images, including explicit and NSFW (Not Safe For Work) images, from textual inputs.
**Features:**
- **Uncensored Output:** The model produces uncensored and potentially explicit images based on textual inputs.
- **Tensor Type:** Operates with FP16 tensor type for optimized performance and efficiency.
- **Large Model Size:** With 3.47 billion parameters, the model offers a vast capacity for learning and generating diverse imagery.
- **Community Engagement:** As part of MysteriousAI's open source initiatives, the model encourages collaboration and contributions from the AI community.
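As a minimal usage sketch (not from the original card, and assuming the repository loads with a standard diffusers pipeline in FP16, as noted in the features above):
```python
import torch
from diffusers import DiffusionPipeline

# DiffusionPipeline auto-detects the concrete pipeline class from the repository.
pipe = DiffusionPipeline.from_pretrained("UnfilteredAI/NSFW-gen", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a scenic mountain landscape at sunset", num_inference_steps=30).images[0]
image.save("output.png")
```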
**Usage Guidelines:**
- **Responsible Use:** Exercise discretion and responsibility when generating content with this model.
- **Age Restriction:** Due to the explicit nature of the generated content, usage is restricted to individuals over the legal age in their jurisdiction.
- **Ethical Considerations:** Avoid using the model to create harmful or offensive imagery.
**Get Involved:**
- **Contribute:** Help enhance the capabilities and ethical considerations of the model by contributing to its development on MysteriousAI's open source platform.
- **Explore:** Dive into the uncensored AI-generated imagery produced by the model to explore its creative potential and applications.
- **Connect:** Engage with the MysteriousAI community to share insights, feedback, and ideas related to NSFW content generation and AI ethics. |
allknowingroger/TripleMerge-7B-Ties | allknowingroger | 2024-04-10T18:32:36Z | 665 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"allknowingroger/limyClown-7B-slerp",
"allknowingroger/LeeMerge-7B-slerp",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:allknowingroger/limyClown-7B-slerp",
"base_model:allknowingroger/LeeMerge-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T08:03:18Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/LeeMerge-7B-slerp
base_model:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/LeeMerge-7B-slerp
license: apache-2.0
---
# TripleMerge-7B-Ties
TripleMerge-7B-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
* [allknowingroger/LeeMerge-7B-slerp](https://huggingface.co/allknowingroger/LeeMerge-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: allknowingroger/MultiverseEx26-7B-slerp
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: allknowingroger/limyClown-7B-slerp
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: allknowingroger/LeeMerge-7B-slerp
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: allknowingroger/limyClown-7B-slerp
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/TripleMerge-7B-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
rmdhirr/Multiparadigm_7B | rmdhirr | 2024-04-09T17:27:21Z | 665 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"roleplay",
"MTSAIR/multi_verse_model",
"ResplendentAI/Paradigm_7B",
"conversational",
"en",
"base_model:MTSAIR/multi_verse_model",
"base_model:ResplendentAI/Paradigm_7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T13:55:00Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- mergekit
- mistral
- roleplay
- MTSAIR/multi_verse_model
- ResplendentAI/Paradigm_7B
base_model:
- MTSAIR/multi_verse_model
- ResplendentAI/Paradigm_7B
model-index:
- name: Multiparadigm_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Multiparadigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Multiparadigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Multiparadigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.87
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Multiparadigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Multiparadigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Multiparadigm_7B
name: Open LLM Leaderboard
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ad2502043d53781aad2ee4/kmTOHCPDzyflcIrXPzgT5.png" alt="image" width="540" height="540" style="margin-bottom: 30px;">
# 🌠 Multiparadigm_7B
Multiparadigm_7B is a merge of the following models:
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
* [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B)
# Quantizations
Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co/mradermacher/Multiparadigm_7B-GGUF).
# Configuration
```yaml
slices:
- sources:
- model: MTSAIR/multi_verse_model
layer_range: [0, 32]
- model: ResplendentAI/Paradigm_7B
layer_range: [0, 32]
merge_method: slerp
base_model: MTSAIR/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.6, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.6, 0.7, 0.3, 0]
- value: 0.6
dtype: bfloat16
```
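# Usage
A minimal usage sketch (not part of the original card), assuming the tokenizer ships a chat template as the Mistral base does:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rmdhirr/Multiparadigm_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-style prompt and generate a reply.
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```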
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rmdhirr__Multiparadigm_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.08|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |88.95|
|MMLU (5-Shot) |64.28|
|TruthfulQA (0-shot) |76.87|
|Winogrande (5-shot) |83.82|
|GSM8k (5-shot) |69.37|
|
Niggendar/duchaitenPonyXLNo_v20 | Niggendar | 2024-05-03T09:44:54Z | 665 | 4 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-03T09:40:22Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
temporary0-0name/pragna-gguf | temporary0-0name | 2024-06-13T07:11:23Z | 665 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-01T04:31:39Z | Entry not found |
mradermacher/Elysium2.2-task-11b-GGUF | mradermacher | 2024-06-11T23:27:49Z | 665 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Trinity_Notus-xb",
"powermove72/GreenScorpius-xb-Passthrough",
"en",
"base_model:powermove72/Elysium2.2-task-11b",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T22:44:06Z | ---
base_model: powermove72/Elysium2.2-task-11b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Trinity_Notus-xb
- powermove72/GreenScorpius-xb-Passthrough
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Elysium2.2-task-11b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.2-task-11b-GGUF/resolve/main/Elysium2.2-task-11b.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
d0rj/rut5-base-summ | d0rj | 2023-10-05T06:21:49Z | 664 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"dialogue-summarization",
"ru",
"en",
"dataset:d0rj/samsum-ru",
"dataset:IlyaGusev/gazeta",
"dataset:zjkarina/matreshka",
"dataset:rcp-meetings/rudialogsum_v2",
"dataset:GEM/wiki_lingua",
"dataset:mlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | 2023-06-21T15:11:27Z | ---
language:
- ru
- en
tags:
- summarization
- dialogue-summarization
- text2text-generation
- t5
datasets:
- d0rj/samsum-ru
- IlyaGusev/gazeta
- zjkarina/matreshka
- rcp-meetings/rudialogsum_v2
- GEM/wiki_lingua
- mlsum
metrics:
- bleu
- rouge
widget:
- example_title: Diploma Introduction
text: 'Актуальность проблемы. Электронная информация играет все большую роль во
всех сферах жизни современного общества. В последние годы объем научно-технической
текстовой информации в электронном виде возрос настолько, что возникает угроза
обесценивания этой информации в связи с трудностями поиска необходимых сведений
среди множества доступных текстов. Развитие информационных ресурсов Интернет многократно
усугубило проблему информационной перегрузки. В этой ситуации особенно актуальными
становятся методы автоматизации реферирования текстовой информации, то есть методы
получения сжатого представления текстовых документов–рефератов (аннотаций). Постановка проблемы автоматического
реферирования текста и соответственно попытки ее решения с использованием различных
подходов предпринимались многими исследователями. История применения вычислительной
техники для реферирования насчитывает уже более 50 лет и связана с именами таких
исследователей, как Г.П. Лун, В.Е. Берзон, И.П. Cевбо, Э.Ф. Скороходько, Д.Г.
Лахути, Р.Г. Пиотровский и др. За эти годы выработаны многочисленные подходы
к решению данной проблемы, которые достаточно четко подразделяются на два направления:
автоматическое реферирование, основанное на экстрагировании из первичных документов
с помощью определенных формальных признаков «наиболее информативных» фраз (фрагментов),
совокупность которых образует некоторый экстракт; автоматическое реферирование,
основанное на выделении из текстов с помощью специальных информационных языков
наиболее существенной информации и порождении новых текстов (рефератов), содержательно
обобщающих первичные документы.
'
- example_title: Biological Info
text: Первую многоножку, у которой более тысячи ног, обнаружили в австралийских
пещерах биологи, изучавшие там подземные воды. Предыдущей рекордсменкой по количеству
ног была 700-ногая многоножка. Новый вид имеет длинное тонкое тело, похожее на
нить, и большое количество конечностей, по-видимому, дает преимущества для быстрого
перемещения и проникновения в труднодоступные места — ученые полагают, такая многоножка
может спокойно перемещаться по трещинам в камнях. Австралия известна своими огромными
и жутковатыми животными вроде 25-сантиметровых пауков. Теперь список пугающих
членистоногих пополнился самой «многоногой» в мире многоножкой, у которой более
тысячи ног. Необычное животное обнаружила группа исследователей из Австралии и
США в пещерах на западе страны. Подробнее многоножку ученые описали в статье в
журнале Scientific Reports. Исследователи занимались оценкой воздействия подземных
вод на окружающую среду в зоне добычи полезных ископаемых на западе страны, когда
наткнулись на новый вид многоножек. В отличие от большинства сородичей, живущих
на поверхности, эти многоножки обитали в пещерах на глубине до 60 метров. Новый
вид исследователи назвали Eumillipes persephone, в честь Персефоны — древнегреческой
богини подземного мира. У многоножки оказалось 1306 ног — больше, чем у любого
другого известного вида. Предыдущей рекордсменкой была калифорнийская Illacme
plenipes, у которой насчитывалось до 750 ног. «Эти животные были настолько уникальны,
— говорит биолог Бруно Бузатто. — Как только я понял, какой длины они были...
Стало ясно, что это что-то совершенно новое». У Е. persephone нитевидное тело
длиной около 9,5 см и шириной всего миллиметр, состоящее из 330 сегментов, короткие
ноги и конусообразная голова. Как и другие животные, живущие в постоянной темноте,
эти многоножки бледны и слепы. Энтомолог Пол Марек сравнивает ее с белой нитью,
выдернутой из рубашки. Чтобы посчитать количество ног, ученым пришлось сначала
снять многоножку в высоком разрешении, а затем закрашивать на фото каждый десяток
ног другим цветом. (https://www.gazeta.ru/science/2021/12/17_a_14325355.shtml)
model-index:
- name: d0rj/rut5-base-summ
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 28.8694
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTdhMjM4YWJiZTBiNmJhNTc0ZjQ4M2E1M2RjMDI4YThmYjEzMmJmZjdlOGRjOWRiZDA2YWEzOTU0MTMwYmJjNCIsInZlcnNpb24iOjF9.6bddYV-Rnp4zhgQjbv4cHKtXHDCLknfxTH2bYsVe6R9wgW45gUSck61EIeJdqj0PS7Vi2zcz8YW4DEXDd-UECQ
- type: rouge
value: 8.4686
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2NiOTdkNTE0NjVmYTcyMDY1ZjYzMjZlZjI1MDViOWFmM2M1MDZjOGZiYThlMTc4MWY5YThlNDJmOGYyNWRmZCIsInZlcnNpb24iOjF9.BLeNnva3EJFMz8xwXZamguKWhLyaGTRFr1C12Yh8xTlimrc_mENHqwshJxdi4RULcGwlQmjGjXNw1DMJ42pDCQ
- type: rouge
value: 24.2357
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjM2OGMwMGYzOTgwNjhlZjhjNjU2YzU3MDZkMjljMzExZjYwNTU1MDM0YmM1NmE3OTUyYzBhMzcyOGM5ZmY1MSIsInZlcnNpb24iOjF9.ttNXKqK9rTsMM1aj4XvXUVuJZZAtgG2JE2NI3ZWT4kVcsC7F6mYFXNfUKEk2koKPkq0gwdPiAc-wrpbtmQg5Cg
- type: rouge
value: 25.8543
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmMzNTdjMGFkNzhiYzFkYWMwNWRlZTM5ZTMxMjU5MjFiMDVjNDk3MWFlMTM5Yzc4MTFjYzNhODVmNjQ3ZTBmNyIsInZlcnNpb24iOjF9.80eUellPpBKzjW-kKDjr6WlzFh_nyC2Q4-gNtzQekOzUyuPynl913nWh4NcAw5YXbYC24dklshdFD5VheeYlBQ
- type: loss
value: 3.2456042766571045
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWEwZTRlMDEyZDQ0ZTBiMzAyZTkyZWUwOGYyODQ0NzA1MWM4NTE3ZDBjYTYxNmMxODIyNmNiZDM0MmYzMzNjZCIsInZlcnNpb24iOjF9.evm57JUbHamIx2FcFmjRHwvZ3e818BYxT8tuv26KVnq9IHO8xyMHPes9slOMhdTijAT_leexMIKMRT_iFOGHAg
- type: gen_len
value: 27.5543
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTIzNWY2NGEyNjQyMzdmYmM4ZDI5MTVkZDQ0ZTk3NmFiOWRjNTdiMjU3MmQwY2NhZTEzNTVhZGU5NTZhN2JiZiIsInZlcnNpb24iOjF9.7C3OQP2bl0EZRhRlRYtxd6cQapif0a2Vq5kRFiSdX6KLxU_QALZAlh_9DFyyYPh39R3e6Hyi438Ox6BGaalmBA
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- type: rouge
value: 16.1279
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTgxZjgzZjAwOGViNTNjZDZhNTBhNzUzOTJiYmRkOWM5NDc0MjAxNzdjZDMyMDhlYzFiN2NlNGI5OTVkNDNiOCIsInZlcnNpb24iOjF9.Ix7CPD9gti8W7OMF4NtbMejt6fPEoZRJw2O-GeV-JRkgrIRXkrifGGE55BHdQyEe2KRcAcNCTav6igtiuEd7Cw
- type: rouge
value: 2.1128
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmViN2Y2OTA2YmE4YWYyODUxZDljNzkwMTM0ZWQ0ZTdjYWIzOTY0NzZjZTA1ZWNhNmE3OTU0NDAzNjVkMmE4ZSIsInZlcnNpb24iOjF9.G0EZ_XzRCLNeNL32Hzr7nGQmbX8rOjHfVuZW2zyIgnzenMMHWzAtN5vqq7R9ZVPP3roLFD5XSFyigHchIqu2Cg
- type: rouge
value: 12.2033
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTM3YTY2YWY5M2VhM2MyNWZhOGZlNTZlMjk5Y2MzMGViYTAyOGJjODZkZTBhMjZjOWViYTI4Y2RkMmEwZTQxNCIsInZlcnNpb24iOjF9.i3wgF36CvXF3rVGQHiR7bpkQ4zB7huuOn8yNnj-elmOkDmp7Mmw1UfGCX_1x99mBrnMHWdCJ57rCCYr2XbWrAw
- type: rouge
value: 12.4127
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODg2MDUxZGU3MThjNzUxY2FiYjE3NTk5ZTdkNTYzMmFmNGVmZGU0NDQwOWQzOWNiZDE5YmViNDZlMzA4OWI2ZCIsInZlcnNpb24iOjF9.VJsGd6PO5iy5p5bE2QS81UXhxocsEnRNswbPcWdMDO9yQ4rU7v0QlucI53bQ7MPGpDThMCfyjpvnu682rQDGDg
- type: loss
value: 4.575754165649414
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmY5NTc1MGY3MmFkNWMyZDlmNTZhOWYyYWRhNGM0MWJkYmVjZGZiODAzYmZmMDk3MjViMDNjYjFlYmJlZDhiYSIsInZlcnNpb24iOjF9.0Tc8MQAwiII5CcQTtb7MO-vZX_KXiBuKy30c4qG5MeRgcLmKMYZcG-zb2MgHr-kthgsHAasDciWOPhzMxMNzDg
- type: gen_len
value: 39.2258
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGM1NTgzMjJlYjMxZjhmM2E1NjIwMDI4YTJkNzI5YjNlMWNiOTQxYTMxNzg3Zjk4YjBiOGU3MGMxNWJkNzUwZSIsInZlcnNpb24iOjF9.xFEROlFxxfmkQDBXQOaCOXApaVoarWTQYTVw8pB8CCRuKSRWljDWuPARSLX79Mv0k3VtHhWNRneW4PakeVyPCA
---
# rut5-base-summ
## Model
Finetuned [ai-forever/ruT5-base](https://huggingface.co/ai-forever/ruT5-base) for text and dialogue summarization.
## Data
- [d0rj/samsum-ru](https://huggingface.co/datasets/d0rj/samsum-ru)
- [IlyaGusev/gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
- [zjkarina/matreshka](https://huggingface.co/datasets/zjkarina/matreshka)
- [rcp-meetings/rudialogsum_v2](https://huggingface.co/datasets/rcp-meetings/rudialogsum_v2)
- [GEM/wiki_lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [mlsum](https://huggingface.co/datasets/mlsum)
All 'train' subsets were concatenated and shuffled with seed `1000 - 7`.
Train subset = 155678 rows.
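For reference, the mixing step can be sketched with the `datasets` library roughly as follows; the column mapping is omitted and the exact preprocessing used for training is an assumption:
```python
from datasets import load_dataset, concatenate_datasets

# Illustrative only: each source has its own column names, so every dataset
# must first be mapped to a shared (text, summary) schema before concatenation.
samsum_ru = load_dataset("d0rj/samsum-ru", split="train")
gazeta = load_dataset("IlyaGusev/gazeta", split="train")
# ... load and normalise the remaining sources listed above, then:
# train = concatenate_datasets([samsum_ru, gazeta, ...]).shuffle(seed=1000 - 7)
```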
## Metrics
Evaluation on 10% of concatenated 'validation' subsets = 1458 rows.
See [WandB logs](https://wandb.ai/d0rj/summarization/runs/5mmyskgi).
See report at **REPORT WIP**.
## Notes
> Scheduler, optimizer and trainer states are saved into this repo, so you can use them to continue fine-tuning on your own data from the saved training state.
## Usage
### Summarization pipeline
```python
from transformers import pipeline
pipe = pipeline('summarization', model='d0rj/rut5-base-summ')
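# `text` is the Russian article or dialogue you want to summarize (e.g. a news story)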
pipe(text)
```
### Text-to-text generation
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('d0rj/rut5-base-summ')
model = T5ForConditionalGeneration.from_pretrained('d0rj/rut5-base-summ').eval()
input_ids = tokenizer(text, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
|
TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF | TheBloke | 2023-09-27T13:02:35Z | 664 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"base_model:grimpep/L2-MythoMax22b-instruct-Falseblock",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-09-05T02:36:36Z | ---
license: llama2
tags:
- llama
- llama-2
model_name: L2 MythoMax 22B Instruct Falseblock
inference: false
model_creator: grimpep
model_link: https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock
model_type: llama
quantized_by: TheBloke
base_model: grimpep/L2-MythoMax22b-instruct-Falseblock
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# L2 MythoMax 22B Instruct Falseblock - GGUF
- Model creator: [grimpep](https://huggingface.co/grimpep)
- Original model: [L2 MythoMax 22B Instruct Falseblock](https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock)
## Description
This repo contains GGUF format model files for [grimpep's L2 MythoMax 22B Instruct Falseblock](https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock).
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is a extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
Here are a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGML)
* [grimpep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/grimpep/L2-MythoMax22b-instruct-Falseblock)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [l2-mythomax22b-instruct-Falseblock.Q2_K.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q2_K.gguf) | Q2_K | 2 | 9.08 GB| 11.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [l2-mythomax22b-instruct-Falseblock.Q3_K_S.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q3_K_S.gguf) | Q3_K_S | 3 | 9.47 GB| 11.97 GB | very small, high quality loss |
| [l2-mythomax22b-instruct-Falseblock.Q3_K_M.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q3_K_M.gguf) | Q3_K_M | 3 | 10.61 GB| 13.11 GB | very small, high quality loss |
| [l2-mythomax22b-instruct-Falseblock.Q3_K_L.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q3_K_L.gguf) | Q3_K_L | 3 | 11.61 GB| 14.11 GB | small, substantial quality loss |
| [l2-mythomax22b-instruct-Falseblock.Q4_0.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q4_0.gguf) | Q4_0 | 4 | 12.34 GB| 14.84 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [l2-mythomax22b-instruct-Falseblock.Q4_K_S.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q4_K_S.gguf) | Q4_K_S | 4 | 12.42 GB| 14.92 GB | small, greater quality loss |
| [l2-mythomax22b-instruct-Falseblock.Q4_K_M.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q4_K_M.gguf) | Q4_K_M | 4 | 13.18 GB| 15.68 GB | medium, balanced quality - recommended |
| [l2-mythomax22b-instruct-Falseblock.Q5_0.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q5_0.gguf) | Q5_0 | 5 | 15.04 GB| 17.54 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [l2-mythomax22b-instruct-Falseblock.Q5_K_S.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q5_K_S.gguf) | Q5_K_S | 5 | 15.04 GB| 17.54 GB | large, low quality loss - recommended |
| [l2-mythomax22b-instruct-Falseblock.Q5_K_M.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q5_K_M.gguf) | Q5_K_M | 5 | 15.47 GB| 17.97 GB | large, very low quality loss - recommended |
| [l2-mythomax22b-instruct-Falseblock.Q6_K.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q6_K.gguf) | Q6_K | 6 | 17.91 GB| 20.41 GB | very large, extremely low quality loss |
| [l2-mythomax22b-instruct-Falseblock.Q8_0.gguf](https://huggingface.co/TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF/blob/main/l2-mythomax22b-instruct-Falseblock.Q8_0.gguf) | Q8_0 | 8 | 23.19 GB| 25.69 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
```
./main -t 10 -ngl 32 -m l2-mythomax22b-instruct-Falseblock.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
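For example, the same settings can be used for an interactive chat session like this:
```
./main -t 10 -ngl 32 -m l2-mythomax22b-instruct-Falseblock.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```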
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/L2-MythoMax22b-Instruct-Falseblock-GGUF", model_file="l2-mythomax22b-instruct-Falseblock.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a minimal sketch follows the links below):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
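As a rough illustration only (the model path and parameter values below are placeholders, not tested settings), wrapping one of these GGUF files for LangChain via llama-cpp-python looks like this:
```python
from langchain.llms import LlamaCpp

# Placeholder path and values - point this at your downloaded GGUF file.
llm = LlamaCpp(
    model_path="./l2-mythomax22b-instruct-Falseblock.q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
    temperature=0.7,
)
print(llm("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a short story about llamas.\n\n### Response:"))
```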
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: grimpep's L2 MythoMax 22B Instruct Falseblock
[Experimental model]
This model is an experiment using the frankenstein script from
https://huggingface.co/chargoddard/llama2-22b
BLOCK_DIAGONAL = False
Using:
https://huggingface.co/Gryphe/MythoMax-L2-13b
+
Then used https://huggingface.co/upstage/llama-30b-instruct-2048
as donor model.
Merging these models used 160GB of system RAM; they merge quickly without swap.
For prompt template and model information see [MythoMax-L2](https://huggingface.co/Gryphe/MythoMax-L2-13b).
<!-- original-model-card end -->
|
ChrisWilson011016/5E55KYkBUJk89yY193DHNkDu99ieitjuoPsJ6s2aH71mjpov_vgg | ChrisWilson011016 | 2024-03-04T18:59:44Z | 664 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-29T12:56:27Z | Entry not found |
RichardErkhov/lgodwangl_-_new_01m-gguf | RichardErkhov | 2024-05-02T10:23:53Z | 664 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-02T08:52:23Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
new_01m - GGUF
- Model creator: https://huggingface.co/lgodwangl/
- Original model: https://huggingface.co/lgodwangl/new_01m/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [new_01m.Q2_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q2_K.gguf) | Q2_K | 2.53GB |
| [new_01m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [new_01m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [new_01m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [new_01m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [new_01m.Q3_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K.gguf) | Q3_K | 3.28GB |
| [new_01m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [new_01m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [new_01m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [new_01m.Q4_0.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_0.gguf) | Q4_0 | 3.83GB |
| [new_01m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [new_01m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [new_01m.Q4_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_K.gguf) | Q4_K | 4.07GB |
| [new_01m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [new_01m.Q4_1.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_1.gguf) | Q4_1 | 4.24GB |
| [new_01m.Q5_0.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_0.gguf) | Q5_0 | 4.65GB |
| [new_01m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [new_01m.Q5_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_K.gguf) | Q5_K | 4.78GB |
| [new_01m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [new_01m.Q5_1.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_1.gguf) | Q5_1 | 5.07GB |
| [new_01m.Q6_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Metin/LLaMA-3-8B-Instruct-TR-DPO | Metin | 2024-06-16T09:48:43Z | 664 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"tr",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-19T09:53:16Z | ---
license: llama3
language:
- tr
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: LLaMA-3-8B-Instruct-TR-DPO
results:
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: MMLU_TR_V0.2
metrics:
- name: 5-shot
type: 5-shot
value: 0.4983
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: Truthful_QA_V0.2
metrics:
- name: 0-shot
type: 0-shot
value: 0.5232
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: ARC_TR_V0.2
metrics:
- name: 25-shot
type: 25-shot
value: 0.4437
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: HellaSwag_TR_V0.2
metrics:
- name: 10-shot
type: 10-shot
value: 0.4558
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: GSM8K_TR_V0.2
metrics:
- name: 5-shot
type: 5-shot
value: 0.5421
verified: false
- task:
type: multiple-choice
dataset:
type: multiple-choice
name: Winogrande_TR_V0.2
metrics:
- name: 5-shot
type: 5-shot
value: 0.5506
verified: false
---
<img src="https://huggingface.co/Metin/LLaMA-3-8B-Instruct-TR-DPO/resolve/main/llama.png"
alt="A Llama reading a book" width="420"/>
# LLaMA-3-8B-Instruct-TR-DPO
LLaMA-3-8B-Instruct-TR-DPO is a fine-tuned version of [Meta-LLaMA-3-8B-Instruct](https://huggingface.co/meta-llama/meta-llama-3-8b-instruct), trained on a synthetically generated preference dataset.
## Training Info
- **Base Model**: [Meta-LLaMA-3-8B-Instruct](https://huggingface.co/meta-llama/meta-llama-3-8b-instruct)
- **Training Data**: A synthetically generated preference dataset consisting of 10K samples was used. No proprietary data was utilized.
- **Training Time**: 3 hours on a single RTX 6000 Ada
- **QLoRA Configs**:
- lora_r: 64
- lora_alpha: 32
- lora_dropout: 0.05
- lora_target_linear: true
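For reference, these settings map roughly onto the following PEFT configuration; the target-module list is an assumption, since only `lora_target_linear: true` is given above:
```python
from peft import LoraConfig

# Approximate equivalent of the axolotl-style settings listed above.
# Targeting all linear projections of the Llama-3 blocks is assumed.
lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```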
The aim was to finetune the model to enhance the output format and content quality for the Turkish language. It is not necessarily smarter than the base model, but its outputs are more likable and preferable.
Compared to the base model, LLaMA-3-8B-Instruct-TR-DPO is more fluent and coherent in Turkish. It can generate more informative and detailed answers for a given instruction.
Note that the model may still generate incorrect or nonsensical outputs, so please verify them before use.
## How to use
You can use the below code snippet to use the model:
```python
from transformers import BitsAndBytesConfig
import transformers
import torch
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model_id = "Metin/LLaMA-3-8B-Instruct-TR-DPO"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16 ,'quantization_config': bnb_config},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a helpful assistant. Always reply in Turkish."},
{"role": "user", "content": "Python'da bir öğenin bir listede geçip geçmediğini nasıl kontrol edebilirim?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=0.2,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## OpenLLMTurkishLeaderboard_v0.2 benchmark results
- **MMLU_TR_V0.2**: 49.83%
- **Truthful_QA_TR_V0.2**: 52.32%
- **ARC_TR_V0.2**: 44.37%
- **HellaSwag_TR_V0.2**: 45.58%
- **GSM8K_TR_V0.2**: 54.21%
- **Winogrande_TR_V0.2**: 55.06%
- **Average**: 50.22%
These scores may differ from what you will get when you run the same benchmarks, as I did not use any inference engine (vLLM, TensorRT-LLM, etc.)
## Output Example (DPO Model vs Base Model)
Testing the model with a single example is not an accurate method. However, an example is provided here to showcase the model's capabilities.
### Model: LLaMA-3-8B-Instruct-TR-DPO
#### Input
```plaintext
Yeni bir derin öğrenme modelinin tanıtımı için bir etkinlik planı hazırla.
```
#### Output
```plaintext
Harika bir fikir!
Etkinlik Planı: "Deep Learning Modeli Tanıtımı"
**Adım 1: Ön-Başlangıç**
* Konuşma konusu: "Deep Learning Modeli Tanıtımı"
* Katılımcı profili: Bilimsel ve teknoloji odaklı profesyoneller, öğrenciler ve araştırmacılar
* Etkinlik süresi: 2 saat
**Adım 2: Giriş**
* Sunum: "Deep Learning Modeli Tanıtımı: Tarihçesi ve Temelleri"
* Sunum içeriği: Deep learning modelinin tarihi, temel kavramları (örneğin, ANN, CNN, RNN), uygulamaları ve avantajları
* Sunum süresi: 30 dakika
**Adım 3: Modelin Çalışma Örnekleri**
* Sunum: "Deep Learning Modelinin Çalışma Örnekleri"
* Sunum içeriği: Modelin uygulamaları (örneğin, görüntü sınıflandırma, doğal dil işleme, makine öğrenimi)
* Sunum süresi: 45 dakika
**Adım 4: Katılımcı Soruları ve Tartışma**
* Katılımcı sorularına cevap verme
* Tartışma: Modelin avantajları ve dezavantajları, uygulamaları ve gelecekteki gelişmeleri
**Adım 5: Sonuç**
* Sunum: "Deep Learning Modelinin Sonuçları"
* Sunum içeriği: Modelin sonuçları, uygulamaları ve gelecekteki gelişmeleri
* Sunum süresi: 15 dakika
**Etkinlik Notları**
* Sunumlar ve tartışmalar için gerekli malzemeleri hazırlayın
* Katılımcıların sorularına cevap verme için yeterli zaman ayırın
* Etkinlik sonrası katılımcıların geri bildirimlerini alın
Umarım bu etkinlik planı sizin için yardımcı olur!
```
### Model: LLaMA-3-8B-Instruct
#### Output (for the same input as above)
```plaintext
Harika bir fikir! Aşağıdaki etkinlik planını hazırladım:
**Etkinlik Adı:** "Derin Öğrenme Modelinin Tanıtımı"
**Etkinlik Amacı:** Derin öğrenme modelinin tanıtımı ve uygulamaları hakkında bilgi sahibi olmak.
**Etkinlik Planı:**
1. **Giriş:** Derin öğrenme modelinin tanıtımı ve önemi hakkında kısa bir konuşma.
2. **Teorik Bölüm:** Derin öğrenme modelinin teorik temelleri ve çalışma prensipleri hakkında sunum.
3. **Uygulama Bölüm:** Derin öğrenme modelinin uygulamaları ve başarıları hakkında sunum.
4. **Pratik Bölüm:** Derin öğrenme modelinin pratik uygulamaları ve deneyimi hakkında sunum.
5. **Soru-Cevap Bölüm:** Derin öğrenme modelinin sorularına cevap verme.
6. **Sonuç Bölüm:** Derin öğrenme modelinin tanıtımı ve uygulamaları hakkında sonuç ve tavsiye.
**Etkinlik Malzemeleri:**
* Sunum araçları (slaj, powerpoint, vs.)
* Derin öğrenme modelinin uygulamaları hakkında örnekler ve grafikler
* Derin öğrenme modelinin teorik temelleri hakkında kitap ve makaleler
* Derin öğrenme modelinin pratik uygulamaları hakkında deneyimi hakkında sunum
**Etkinlik Süresi:** 2 saat
**Katılımcı Sayısı:** 20-30 kişi
**Etkinlik Yeri:** Bilimsel toplantı salonu veya online platform
**Etkinlik Tarih ve Saati:** 10:00-12:00
**Etkinlik Dili:** Türkçe
**Etkinlik Katılımcıları:** Bilimsel toplantı salonunda veya online platformda katılımcılar.
**Etkinlik Sonuçları:** Derin öğrenme modelinin tanıtımı ve uygulamaları hakkında bilgi sahibi olmak.
**Etkinlik Tavsiyesi:** Derin öğrenme modelinin tanıtımı ve uygulamaları hakkında bilgi sahibi olmak.
```
|
bartowski/speechless-instruct-mistral-7b-v0.2-GGUF | bartowski | 2024-05-22T15:12:40Z | 664 | 0 | transformers | [
"transformers",
"gguf",
"code",
"text-generation",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:TokenBender/python_eval_instruct_51k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:56:51Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
- TokenBender/python_eval_instruct_51k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value:
verified: false
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of speechless-instruct-mistral-7b-v0.2
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization.
Original model: https://huggingface.co/uukuguy/speechless-instruct-mistral-7b-v0.2
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
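For context, an imatrix-based quantisation with that llama.cpp release looks roughly like the commands below; the file names are placeholders and the exact invocations used for this repo are not published, so treat this as an assumption:
```
# Build the importance matrix from a calibration text file
./imatrix -m speechless-instruct-mistral-7b-v0.2-f16.gguf -f calibration.txt -o imatrix.dat
# Quantize using the importance matrix
./quantize --imatrix imatrix.dat speechless-instruct-mistral-7b-v0.2-f16.gguf speechless-instruct-mistral-7b-v0.2-Q4_K_M.gguf Q4_K_M
```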
## Prompt format
No chat template is specified, so the default is used. This may be incorrect; check the original model card for details.
```
<s> [INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST] </s>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [speechless-instruct-mistral-7b-v0.2-Q8_0.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [speechless-instruct-mistral-7b-v0.2-Q6_K.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-Q5_K_M.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-Q5_K_S.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-Q4_K_M.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-Q4_K_S.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-IQ4_NL.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ4_NL.gguf) | IQ4_NL | 4.12GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-IQ4_XS.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ4_XS.gguf) | IQ4_XS | 3.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [speechless-instruct-mistral-7b-v0.2-Q3_K_L.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [speechless-instruct-mistral-7b-v0.2-Q3_K_M.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [speechless-instruct-mistral-7b-v0.2-IQ3_M.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [speechless-instruct-mistral-7b-v0.2-IQ3_S.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [speechless-instruct-mistral-7b-v0.2-Q3_K_S.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [speechless-instruct-mistral-7b-v0.2-IQ3_XS.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ3_XS.gguf) | IQ3_XS | 3.01GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [speechless-instruct-mistral-7b-v0.2-IQ3_XXS.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ3_XXS.gguf) | IQ3_XXS | 2.82GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [speechless-instruct-mistral-7b-v0.2-Q2_K.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-Q2_K.gguf) | Q2_K | 2.71GB | Very low quality but surprisingly usable. |
| [speechless-instruct-mistral-7b-v0.2-IQ2_M.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ2_M.gguf) | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [speechless-instruct-mistral-7b-v0.2-IQ2_S.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ2_S.gguf) | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. |
| [speechless-instruct-mistral-7b-v0.2-IQ2_XS.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ2_XS.gguf) | IQ2_XS | 2.19GB | Very low quality, uses SOTA techniques to be usable. |
| [speechless-instruct-mistral-7b-v0.2-IQ2_XXS.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | Lower quality, uses SOTA techniques to be usable. |
| [speechless-instruct-mistral-7b-v0.2-IQ1_M.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ1_M.gguf) | IQ1_M | 1.75GB | Extremely low quality, *not* recommended. |
| [speechless-instruct-mistral-7b-v0.2-IQ1_S.gguf](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-GGUF/blob/main/speechless-instruct-mistral-7b-v0.2-IQ1_S.gguf) | IQ1_S | 1.61GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/speechless-instruct-mistral-7b-v0.2-GGUF --include "speechless-instruct-mistral-7b-v0.2-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/speechless-instruct-mistral-7b-v0.2-GGUF --include "speechless-instruct-mistral-7b-v0.2-Q8_0.gguf/*" --local-dir speechless-instruct-mistral-7b-v0.2-Q8_0
```
You can either specify a new local-dir (speechless-instruct-mistral-7b-v0.2-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
togethercomputer/m2-bert-80M-2k-retrieval | togethercomputer | 2024-01-12T21:47:38Z | 663 | 14 | transformers | [
"transformers",
"pytorch",
"m2_bert",
"text-classification",
"sentence-similarity",
"custom_code",
"en",
"arxiv:2310.12109",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | sentence-similarity | 2023-11-13T19:53:03Z | ---
license: apache-2.0
language:
- en
pipeline_tag: sentence-similarity
inference: false
---
# Monarch Mixer-BERT
An 80M checkpoint of M2-BERT, pretrained with sequence length 2048 and fine-tuned for long-context retrieval.
Check out the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109) and our [blog post]() on retrieval for more on how we trained this model for long sequences.
This model was trained by Jon Saad-Falcon, Dan Fu, and Simran Arora.
Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it!
## How to use
You can load this model using Hugging Face `AutoModel`:
```python
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(
"togethercomputer/m2-bert-80M-2k-retrieval",
trust_remote_code=True
)
```
You should expect to see a large error message about unused parameters for FlashFFTConv.
If you'd like to load the model with FlashFFTConv, you can check out our [GitHub](https://github.com/HazyResearch/m2/tree/main).
This model generates embeddings for retrieval. The embeddings have a dimensionality of 768:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
max_seq_length = 2048
testing_string = "Every morning, I make a cup of coffee to start my day."
model = AutoModelForSequenceClassification.from_pretrained(
"togethercomputer/m2-bert-80M-2k-retrieval",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
"bert-base-uncased",
model_max_length=max_seq_length
)
input_ids = tokenizer(
[testing_string],
return_tensors="pt",
padding="max_length",
return_token_type_ids=False,
truncation=True,
max_length=max_seq_length
)
outputs = model(**input_ids)
embeddings = outputs['sentence_embedding']
```
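Since these embeddings are intended for retrieval, a query can be scored against candidate documents with cosine similarity. A minimal sketch reusing the objects defined above (the query and document strings are just placeholders):
```python
import torch.nn.functional as F

def embed(texts):
    batch = tokenizer(
        texts,
        return_tensors="pt",
        padding="max_length",
        return_token_type_ids=False,
        truncation=True,
        max_length=max_seq_length,
    )
    return model(**batch)['sentence_embedding']

query_emb = embed(["When do I drink coffee?"])
doc_embs = embed([testing_string, "The weather was stormy all week."])
scores = F.cosine_similarity(query_emb, doc_embs)  # one score per document
print(scores)
```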
You can also get embeddings from this model using the Together API as follows (you can find your API key [here](https://api.together.xyz/settings/api-keys)):
```python
import os
import requests
def generate_together_embeddings(text: str, model_api_string: str, api_key: str):
url = "https://api.together.xyz/api/v1/embeddings"
headers = {
"accept": "application/json",
"content-type": "application/json",
"Authorization": f"Bearer {api_key}"
}
session = requests.Session()
response = session.post(
url,
headers=headers,
json={
"input": text,
"model": model_api_string
}
)
if response.status_code != 200:
raise ValueError(f"Request failed with status code {response.status_code}: {response.text}")
return response.json()['data'][0]['embedding']
print(generate_together_embeddings(
'Hello world',
'togethercomputer/m2-bert-80M-2k-retrieval',
os.environ['TOGETHER_API_KEY'])[:10]
)
```
## Acknowledgments
Alycia Lee helped with AutoModel support.
## Citation
If you use this model, or otherwise found our work valuable, you can cite us as follows:
```
@inproceedings{fu2023monarch,
title={Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture},
author={Fu, Daniel Y and Arora, Simran and Grogan, Jessica and Johnson, Isys and Eyuboglu, Sabri and Thomas, Armin W and Spector, Benjamin and Poli, Michael and Rudra, Atri and R{\'e}, Christopher},
booktitle={Advances in Neural Information Processing Systems},
year={2023}
}
``` |
tsunemoto/LiteLlama-460M-1T-GGUF | tsunemoto | 2024-01-07T22:43:22Z | 663 | 5 | null | [
"gguf",
"GGUF",
"en",
"region:us"
] | null | 2024-01-07T22:41:51Z | ---
title: "LiteLlama-460M-1T Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of LiteLlama-460M-1T
This is a GGUF quantization of LiteLlama-460M-1T.
## Original Repo Link:
[Original Repository](https://huggingface.co/ahxt/LiteLlama-460M-1T)
## Original Model Card:
---
# LiteLlama: Reduced-Scale Llama
In this series of repos, we present an open-source reproduction of Meta AI's [LLaMa 2](https://ai.meta.com/llama/), at significantly reduced model sizes: [LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T) has 460M parameters and was trained on 1T tokens.
## Dataset and Tokenization
We train our models on part of the [RedPajama](https://www.together.xyz/blog/redpajama) dataset. We use the [GPT2Tokenizer](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/gpt2#transformers.GPT2Tokenizer) to tokenize the text.
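For illustration, tokenization then looks like this (assuming the stock `gpt2` tokenizer files):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer("Q: What is the largest bird?\nA:").input_ids)
```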
## Training Details
The model was trained on ~1T tokens (0.98T): number of tokens = steps × sequence length × batch size = 499,679 × 1,024 × 192 = 98,240,888,832 ≈ 0.98T.
The training curve is at this [WandB project](https://wandb.ai/ahxt/llama2_xs_460M_training_loss/reports/reduced_train_loss-23-09-05-20-25-43---Vmlldzo1MzIwNDUx?accessToken=x2ch3n30jo77p1x8y7q9js4h4d8zpjtz1tzot4xxullyefixp4jwt7au2q37k2q6).
### Using with HuggingFace Transformers
The experimental checkpoints can be loaded directly with the [Transformers](https://huggingface.co/transformers/) library. The following code snippet shows how to load our experimental model and generate text with it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = 'ahxt/LiteLlama-460M-1T'
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
prompt = 'Q: What is the largest bird?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
tokens = model.generate(input_ids, max_length=20)
print( tokenizer.decode(tokens[0].tolist(), skip_special_tokens=True) )
# Q: What is the largest bird?\nA: The largest bird is a black-headed gull.
```
## Evaluation
### We evaluate our models on the MMLU task.
| Models | #parameters |zero-shot | 5-shot |
| --- | --- | --- | --- |
| llama | 7B | 28.46 | 35.05 |
| openllama | 3B | 24.90 | 26.71 |
|TinyLlama-1.1B-step-50K-105b | 1.1B | 19.00 | 26.53 |
| LiteLlama-460M-1T | 0.46B | 21.13 | 26.39 |
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 26.65 |
| ARC (25-shot) | 24.91 |
| HellaSwag (10-shot) | 38.47 |
| MMLU (5-shot) | 26.17 |
| TruthfulQA (0-shot) | 41.59 |
| Winogrande (5-shot) | 49.88 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.51 |
## Contact
This model was developed by [Xiaotian Han](https://ahxt.github.io/) from Texas A&M University and is released under the MIT License.
|
predibase/customer_support | predibase | 2024-02-26T19:46:49Z | 663 | 4 | peft | [
"peft",
"safetensors",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | text-generation | 2024-02-19T19:05:00Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
Description: Customer support call classification given call transcript\
Original dataset: https://github.com/cricketclub/gridspace-stanford-harper-valley \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Topic Identification and the name is Customer Support Automation\
---\
Sample input: Consider the case of a customer contacting the support center.\nThe term "task type" refers to the reason for why the customer contacted support.\n\n### The possible task types are: ### \n- replace card\n- transfer money\n- check balance\n- order checks\n- pay bill\n- reset password\n- schedule appointment\n- get branch hours\n- none of the above\n\nSummarize the issue/question/reason that drove the customer to contact support:\n\n### Transcript: <caller> [noise] <agent> [noise] <caller> [noise] <caller> [noise] hello <caller> hello <agent> hi i'm sorry this this call uh hello this is harper valley national bank my name is dawn how can i help you today <caller> hi <caller> oh okay my name is jennifer brown and i need to check my account balance if i could <caller> [noise] <caller> [noise] [noise] [noise] <agent> what account would you like to check <caller> um <caller> [noise] <caller> <unk> <caller> uhm my savings account <caller> please <caller> <unk> <caller> [noise] <caller> [noise] <caller> oh but the way that you're doing <agent> one moment <caller> hello <agent> yeah one moment <caller> uh huh <caller> no problem <caller> [noise] <agent> your account balance is eighty two dollars is there anything else i can help you with <caller> no i don't think so thank you so much you were very helpful <agent> thank you <caller> have a good day bye bye <caller> [noise] <agent> you too \n\n### Task Type:\n\ntest_transcript = \
---\
Sample output: check balance\
---\
Try using this adapter yourself!
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/customer_support"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
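
# --- Hypothetical usage sketch (not from the original card) ---
# Paste the full "Sample input" prompt shown above as `test_transcript`.
test_transcript = "..."
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(test_transcript, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))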
``` |
LupoSun/SE_CEM_BRIDGE | LupoSun | 2024-03-03T11:57:47Z | 663 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | text-to-image | 2024-03-03T11:52:50Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
physical model, arch bridge, concrete, site, timber, wire trees, dark
acrylic, reflection, studio lighting, SETLKT <lora:SE_CEM_BRIDGE:0.8>
output:
url: images/BAI.png
- text: >-
physical model, suspension bridge, concrete, site, timber, wire trees, dark
acrylic, reflection, studio lighting, SETLKT <lora:SE_CEM_BRIDGE:0.8>
output:
url: images/AAI.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: >-
physical model, arch bridge, suspension bridge, concrete, site, timber, wire
trees, dark acrylic, reflection, studio lighting, SETLKT
license: mit
---
# SE_CEM_BRIDGE
<Gallery />
## Model description
Based on Research From: Case Study 1 of the Master's Thesis, "Structural Embodiment - Unified Workflow and Toolkit for Form-finding, Materialisation and Visualisation via Deep Learning Methods" by Tao Sun, conducted under the professorship of Structural Design and Chair of Architectural Informatics at the Technical University of Munich.
This LoRA model is specifically trained on a dataset comprising 100 varied renderings produced during the first case study of the aforementioned thesis. These renderings serve as the foundational dataset, facilitating the model's ability to generate model-like views of bridge structures with high fidelity.
Optimal Settings:
LoRA weight: 0.8
Depth ControlNet Unit Weight: 0.6
Canny ControlNet Unit Weight: 0.3
Utilising both Depth and Canny ControlNet Units simultaneously with the specified weights enhances the model's effectiveness, producing detailed and context-aware visualisations of bridge structures.

## Trigger words
You should use `physical model` to trigger the image generation.
You should use `arch bridge` to trigger the image generation.
You should use `suspension bridge` to trigger the image generation.
You should use `concrete` to trigger the image generation.
You should use `site` to trigger the image generation.
You should use `timber` to trigger the image generation.
You should use `wire trees` to trigger the image generation.
You should use `dark acrylic` to trigger the image generation.
You should use `reflection` to trigger the image generation.
You should use `studio lighting` to trigger the image generation.
You should use `SETLKT` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LupoSun/SE_CEM_BRIDGE/tree/main) them in the Files & versions tab.
|
sergeyzh/LaBSE-ru-sts | sergeyzh | 2024-03-31T16:12:09Z | 663 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"pretraining",
"russian",
"embeddings",
"feature-extraction",
"sentence-similarity",
"transformers",
"ru",
"base_model:cointegrated/LaBSE-en-ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-03-24T19:42:52Z | ---
language:
- ru
pipeline_tag: sentence-similarity
tags:
- russian
- pretraining
- embeddings
- feature-extraction
- sentence-similarity
- sentence-transformers
- transformers
license: mit
base_model: cointegrated/LaBSE-en-ru
---
## Базовый Bert для Semantic text similarity (STS) на GPU
Качественная модель BERT для расчетов эмбедингов предложений на русском языке. Модель основана на [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) - имеет аналогичные размеры контекста (512), ембединга (768) и быстродействие. Является второй и лучшей по качеству моделью в серии BERT-STS.
На STS и близких задачах (PI, NLI, SA, TI) для русского языка конкурирует по качеству с моделью [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) (но потребляет на 77% меньше памяти и быстрее на 80%).
## Выбор модели из серии BERT-STS (качество/скорость)
| Рекомендуемая модель | CPU <br> (STS; snt/s) | GPU <br> (STS; snt/s) |
|:---------------------------------|:---------:|:---------:|
| Быстрая модель (скорость) | [rubert-tiny-sts](https://huggingface.co/sergeyzh/rubert-tiny-sts) <br> (0.797; 1190) | - |
| Базовая модель (качество) | [rubert-mini-sts](https://huggingface.co/sergeyzh/rubert-mini-sts) <br> (0.815; 539) | **LaBSE-ru-sts <br> (0.845; 1894)** |
## Лучшая модель для использования в составе RAG LLMs при инференсе на GPU:
- высокое качество при нечетких запросах (отличный метрики на задачах STS, PI, NLI);
- низкое влияение эмоциональной окраски текста на ембединг (средние показатели на задачах SA, TI);
- легкое расширение базы текстовых документов (скорость работы на GPU > 1k предложений в секунду);
- ускорение алгоритмов knn при поиске соответствий (пониженная размерность эмбединга 768);
- простота использования (совместимость с [SentenceTransformer](https://github.com/UKPLab/sentence-transformers)).
## Использование модели с библиотекой `transformers`:
```python
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sergeyzh/LaBSE-ru-sts")
model = AutoModel.from_pretrained("sergeyzh/LaBSE-ru-sts")
# model.cuda() # uncomment it if you have a GPU
def embed_bert_cls(text, model, tokenizer):
t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**{k: v.to(model.device) for k, v in t.items()})
embeddings = model_output.last_hidden_state[:, 0, :]
embeddings = torch.nn.functional.normalize(embeddings)
return embeddings[0].cpu().numpy()
print(embed_bert_cls('привет мир', model, tokenizer).shape)
# (768,)
```
## Использование с `sentence_transformers`:
```Python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('sergeyzh/LaBSE-ru-sts')
sentences = ["привет мир", "hello world", "здравствуй вселенная"]
embeddings = model.encode(sentences)
print(util.dot_score(embeddings, embeddings))
```
## Метрики
Оценки модели на бенчмарке [encodechka](https://github.com/avidale/encodechka):
| Модель | STS | PI | NLI | SA | TI |
|:---------------------------------|:---------:|:---------:|:---------:|:---------:|:---------:|
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.862 | 0.727 | 0.473 | 0.810 | 0.979 |
| **sergeyzh/LaBSE-ru-sts** | **0.845** | **0.737** | **0.481** | **0.805** | **0.957** |
| [sergeyzh/rubert-mini-sts](https://huggingface.co/sergeyzh/rubert-mini-sts) | 0.815 | 0.723 | 0.477 | 0.791 | 0.949 |
| [sergeyzh/rubert-tiny-sts](https://huggingface.co/sergeyzh/rubert-tiny-sts) | 0.797 | 0.702 | 0.453 | 0.778 | 0.946 |
| [Tochka-AI/ruRoPEBert-e5-base-512](https://huggingface.co/Tochka-AI/ruRoPEBert-e5-base-512) | 0.793 | 0.704 | 0.457 | 0.803 | 0.970 |
| [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) | 0.794 | 0.659 | 0.431 | 0.761 | 0.946 |
| [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) | 0.750 | 0.651 | 0.417 | 0.737 | 0.937 |
**Задачи:**
- Semantic text similarity (**STS**);
- Paraphrase identification (**PI**);
- Natural language inference (**NLI**);
- Sentiment analysis (**SA**);
- Toxicity identification (**TI**).
## Быстродействие и размеры
На бенчмарке [encodechka](https://github.com/avidale/encodechka):
| Модель | CPU | GPU | size | dim | n_ctx | n_vocab |
|:---------------------------------|----------:|----------:|----------:|----------:|----------:|----------:|
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 149.026 | 15.629 | 2136 | 1024 | 514 | 250002 |
| **sergeyzh/LaBSE-ru-sts** |**42.835** | **8.561** | **490** | **768** | **512** | **55083** |
| [sergeyzh/rubert-mini-sts](https://huggingface.co/sergeyzh/rubert-mini-sts) | 6.417 | 5.517 | 123 | 312 | 2048 | 83828 |
| [sergeyzh/rubert-tiny-sts](https://huggingface.co/sergeyzh/rubert-tiny-sts) | 3.208 | 3.379 | 111 | 312 | 2048 | 83828 |
| [Tochka-AI/ruRoPEBert-e5-base-512](https://huggingface.co/Tochka-AI/ruRoPEBert-e5-base-512) | 43.314 | 9.338 | 532 | 768 | 512 | 69382 |
| [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) | 42.867 | 8.549 | 490 | 768 | 512 | 55083 |
| [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) | 3.212 | 3.384 | 111 | 312 | 2048 | 83828 |
При использовании батчей с `sentence_transformers`:
```python
from sentence_transformers import SentenceTransformer
model_name = 'sergeyzh/LaBSE-ru-sts'
model = SentenceTransformer(model_name, device='cpu')
sentences = ["Тест быстродействия на CPU Ryzen 7 3800X: batch = 50"] * 50
%timeit -n 5 -r 3 model.encode(sentences)
# 882 ms ± 104 ms per loop (mean ± std. dev. of 3 runs, 5 loops each)
# 50/0.882 = 57 snt/s
model = SentenceTransformer(model_name, device='cuda')
sentences = ["Тест быстродействия на GPU RTX 3060: batch = 1500"] * 1500
%timeit -n 5 -r 3 model.encode(sentences)
# 792 ms ± 29 ms per loop (mean ± std. dev. of 3 runs, 5 loops each)
# 1500/0.792 = 1894 snt/s
```
## Связанные ресурсы
Вопросы использования модели обсуждаются в [русскоязычном чате NLP](https://t.me/natural_language_processing).
|
allknowingroger/StarlingMaxLimmy-7B-slerp | allknowingroger | 2024-04-08T08:52:25Z | 663 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/AutoLimmy-7B-slerp",
"allknowingroger/StarlingDolphin-7B-slerp",
"base_model:allknowingroger/AutoLimmy-7B-slerp",
"base_model:allknowingroger/StarlingDolphin-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T08:42:29Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/AutoLimmy-7B-slerp
- allknowingroger/StarlingDolphin-7B-slerp
base_model:
- allknowingroger/AutoLimmy-7B-slerp
- allknowingroger/StarlingDolphin-7B-slerp
license: apache-2.0
---
# StarlingMaxLimmy-7B-slerp
StarlingMaxLimmy-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/AutoLimmy-7B-slerp](https://huggingface.co/allknowingroger/AutoLimmy-7B-slerp)
* [allknowingroger/StarlingDolphin-7B-slerp](https://huggingface.co/allknowingroger/StarlingDolphin-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/AutoLimmy-7B-slerp
layer_range: [0, 32]
- model: allknowingroger/StarlingDolphin-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/AutoLimmy-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/StarlingMaxLimmy-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF | mradermacher | 2024-05-29T00:59:08Z | 663 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Contextual-Obedient-MoE-3x8B-Llama3-RAG",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T04:58:07Z | ---
base_model: TroyDoesAI/Contextual-Obedient-MoE-3x8B-Llama3-RAG
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TroyDoesAI/Contextual-Obedient-MoE-3x8B-Llama3-RAG
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ1_S.gguf) | i1-IQ1_S | 4.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ1_M.gguf) | i1-IQ1_M | 4.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ2_S.gguf) | i1-IQ2_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ2_M.gguf) | i1-IQ2_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q2_K.gguf) | i1-Q2_K | 7.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ3_S.gguf) | i1-IQ3_S | 8.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ3_M.gguf) | i1-IQ3_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q4_0.gguf) | i1-Q4_0 | 11.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Contextual-Obedient-MoE-3x8B-Llama3-RAG-i1-GGUF/resolve/main/Contextual-Obedient-MoE-3x8B-Llama3-RAG.i1-Q6_K.gguf) | i1-Q6_K | 15.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
huggingartists/big-russian-boss | huggingartists | 2021-09-15T16:41:55Z | 662 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/big-russian-boss",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/big-russian-boss
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d66eeeef006738708df1e52b84c34c14.403x403x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Big Russian Boss</div>
<a href="https://genius.com/artists/big-russian-boss">
<div style="text-align: center; font-size: 14px;">@big-russian-boss</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Big Russian Boss.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/big-russian-boss).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/big-russian-boss")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ju9bqqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Big Russian Boss's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3820n7qx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3820n7qx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/big-russian-boss')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/big-russian-boss")
model = AutoModelWithLMHead.from_pretrained("huggingartists/big-russian-boss")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
hopkins/amr-model | hopkins | 2023-03-30T19:16:49Z | 662 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-03-30T17:17:15Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: amr-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amr-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.07 | 2 | 10.9308 |
| 2.2499 | 17.24 | 500 | 1.2289 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
sail-rvc/XXXTentacion | sail-rvc | 2023-07-14T07:34:12Z | 662 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | 2023-07-14T07:33:56Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# XXXTentacion
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:34:12
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
Aznor/llama-2-chat-7b-q4_0 | Aznor | 2023-08-26T14:23:04Z | 662 | 0 | null | [
"gguf",
"region:us"
] | null | 2023-08-26T14:12:00Z | Entry not found |
malcolmrey/serenity | malcolmrey | 2023-12-12T11:53:55Z | 662 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-24T09:17:56Z | ---
license: mit
language:
- en
library_name: diffusers
tags:
- safetensors
- stable-diffusion
---
# About
This is my custom merge model called Serenity for Stable Diffusion 1.5
Two formats are available:
* safetensors
* diffusers
# Civitai Link
https://civitai.com/models/110426/serenity
# Support
If you feel like supporting my work, here is my coffee page :)
https://www.buymeacoffee.com/malcolmrey
# Samples
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/69abd7aa-45a8-4e84-a0dd-63e2094c93a1/width=1024/149471-943806964-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/69abd7aa-45a8-4e84-a0dd-63e2094c93a1/width=1024/149471-943806964-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0610bb3e-a75a-4993-a5b8-04f9de377db4/width=1120/sd-1689525321-2502013093-99ca.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0610bb3e-a75a-4993-a5b8-04f9de377db4/width=1120/sd-1689525321-2502013093-99ca.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cd703faf-f10a-40d1-8dbb-fa2359243237/width=1120/sd-1689525240-827350816-b59c.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cd703faf-f10a-40d1-8dbb-fa2359243237/width=1120/sd-1689525240-827350816-b59c.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bae6338c-401f-4e00-9bef-ff5b080a1497/width=1024/151221-3970928850-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bae6338c-401f-4e00-9bef-ff5b080a1497/width=1024/151221-3970928850-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cd009ca2-17c1-4e83-908a-66331915ac43/width=1024/151223-1982045657-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cd009ca2-17c1-4e83-908a-66331915ac43/width=1024/151223-1982045657-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ac7cba06-dc70-4281-b6ce-447c2e813d89/width=1024/151284-2391586252-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ac7cba06-dc70-4281-b6ce-447c2e813d89/width=1024/151284-2391586252-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8df53e11-e195-46ff-8e11-e5908c4fcf89/width=1024/151256-1674448823-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8df53e11-e195-46ff-8e11-e5908c4fcf89/width=1024/151256-1674448823-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/eed746ba-80e9-4357-ac00-0afadf3b2ca4/width=1024/151281-1817968173-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/eed746ba-80e9-4357-ac00-0afadf3b2ca4/width=1024/151281-1817968173-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/e507b359-8891-4577-9c24-e2d6fa0e3ab2/width=1024/151254-1570201823-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/e507b359-8891-4577-9c24-e2d6fa0e3ab2/width=1024/151254-1570201823-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg)
[<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bbe5d0ea-e926-4267-a798-9131a4ff5676/width=1024/151306-388801004-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg" width="650"/>](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bbe5d0ea-e926-4267-a798-9131a4ff5676/width=1024/151306-388801004-30-DPM++%202M%20Karras-1408-serenity_v1.jpeg) |
maddes8cht/openlm-research-open_llama_7b_v2-gguf | maddes8cht | 2023-11-15T11:40:35Z | 662 | 1 | transformers | [
"transformers",
"gguf",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-14T05:44:50Z | ---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
library_name: transformers
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# open_llama_7b_v2 - GGUF
- Model creator: [openlm-research](https://huggingface.co/openlm-research)
- Original model: [open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
OpenLlama is a free reimplementation of the original Llama Model which is licensed under Apache 2 license.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model two times the same question and you may encounter bigger quality differences.
---
# Original Model Card:
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as the drop in replacement of LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be use with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow the exactly same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also know as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
AI-aktindsigt/gdpr_anonymiseingsmodel_ganm | AI-aktindsigt | 2023-11-30T11:13:01Z | 662 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"arxiv:2110.05679",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-11-29T14:59:02Z | NLP NER-modeller udviklet under AI-aktindsigt projektet er trænet på tre forskellige måder: Enikke-privatbaseline model og to modeller trænet med differential privacy med hhv. ε=8 og ε=1 (se [SmartNoise Whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-smartnoisedifferential-privacy-machine-learning-case-studies/SmartNoise%20Whitepaper%20Final%203.8.21.pdf)<sup>1</sup> og [Learning with Privacy at Scale](https://machinelearning.apple.com/research/learning-with-privacy-at-scale)<sup>2</sup> for valg af ε i industrien). δ er sat til 1 over længden af træningsdatasættet (se [LLM CAN BE STRONG DP LEARNERS](https://arxiv.org/pdf/2110.05679.pdf)<sup>3</sup>). Derudover er modellerne trænet med to forskellige sæt af entititer - med og uden entiteterne Forbrydelse og CPR-numre, som dog må anses som præmature for udstilling.
Modellen udstillet fremgå af nedenstående listen af forskellige træningsiterationer under navnet "_sss-ner-dp-1 Special semantisk søgemodel finetunet på NER-annoterede aktindsigter med DP - ε = 1 excl. Forbrydelse og CPR_"
Denne model kan anvendes til at fremsøge entiteter i de 7 følgende kategorier: 1) personer 2) lokationer 3) organisationer 4) helbredsoplysninger 5) adresser 6) kommuner og 7) telefonnumre.
| Model | Beskrivelse |
| --- | --- |
| sss-ner | Special semantisk søgemodel finetunet på NER-annoterede aktindsigter excl. Forbrydelse og CPR |
| sss-ner-dp-8 | Special semantisk søgemodel finetunet på NER-annoterede aktindsigter med DP-ε=8 excl. Forbrydelse og CPR |
| sss-ner-dp-1 | Special semantisk søgemodel finetunet på NER-annoterede aktindsigter med DP-ε=1 excl. Forbrydelse og CPR |
| sss-ner-fc | Special semantisk søgemodel finetunet på NER-annoterede aktindsigter |
| sss-ner-fc-dp-8 | Special semantisk søgemodel finetunet på NER-annoterede aktindsigter med DP-ε= 8 |
| sss-ner-fc-dp-1 | Special semantisk søgemodel finetunet på NER-annoterede aktindsigter med DP-ε= 1 |
Modellerne finetunet til Named Entity Recognition (NER)og er trænet til at forudsige følgende kategorier:
| Tag | Navn | Beskrivelse |
| --- | --- | --- |
| PER | Person | Navn på en person (fx_Kasper__Hansen_eller_Birgitte_) |
| LOK | Lokation | Navn på en lokation (fx_Danmark_eller_Kongens__Have_) |
| ADR | Adresse | Navn på en adresse (fx_Rådhuspladsen, __2400__ København_) |
| HEL | Helbred | Ord relaterede til helbred (fx_hovedpine_eller_OCD_) |
| ORG | Organisation | Navn på enorganisation (fx_Alvenir__Aps_eller_Aktio_) |
| KOM | Kommune | Navn på enkommune (fx_Sønderborg__Kommune_) |
| TEL | Telefonnummer | Telefonnummer (fx_11 __22__ 33 __69_,_11223344_eller_1122__ 3344_) |
| CPR | CPR-nummer | CPR- **høj** **usikkerhed** (fx_01011990__1234_,_01011990-1234_eller_010119901234_) |
| FOR | Forbrydelse | Forbrydelse- **høj** **usikkerhed** (fx_tyveri_,_vold_eller_psykisk__vold_) |
## Datasæt
Modellerne er trænet på 49,191 unikke sætninger, og valideret på 2,359 sætninger, og testet på et data-sæt bestående af 125 entiteter fra hver kategori af entiteter som indgik i træningen. Data er blevet filtreret, opdelt i unikke sætninger og derefter inddelt i trænings-, test- og valideringssæt.
## Eksempel på anvendelse af model
```python
from transformers import pipeline
import pandas as pd
ner = pipeline(task='ner', model='../ner/models/sss-ner/best_model', aggregation_strategy='first')
sentence = 'Gunnar Hjering Nielsen er medarbejder i virksomheden Danske Sprogmodeller ApS og har ofte ekstrem hovedpine.' \
'Han bor på Hjortemarken 31, 2100 København Ø som ligger i Københavns Kommune.' \
'Hans tlf nummer er 12345560 og han er fra Danmark. Hjortemarken er centralt placeret i Københaven.'
result = ner(sentence)
print(pd.DataFrame.from_records(result))
```
Nedenstående tabel viser de forskellige modellers Macro-F1 score.
| Model | Macro score | F1- | PER F1 | LOK F1 | ADR F1 | HEL F1 | ORG F1 | KOM F1 | TEL F1 | CPR F1 | FOR F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sss-ner | 0.86 || 0.95 | 0.77 | 0.90 | 0.63 | 0.77 | 0.96 | 0.92 | NA | NA |
| sss-ner-dp-8 | 0.84 || 0.97 | 0.64 | 0.85 | 0.65 | 0.73 | 0.94 | 0.95 | NA | NA |
| sss-ner-dp-1 | 0.75 || 0.94 | 0.51 | 0.82 | 0.59 | 0.62 | 0.86 | 0.75 | NA | NA |
| sss-ner-fc | 0.80 || 0.96 | 0.70 | 0.88 | 0.43 | 0.70 | 0.97 | 0.90 | 0.55 | 0.95 |
| sss-ner-fc-dp-8 | 0.82 || 0.96 | 0.66 | 0.87 | 0.62 | 0.69 | 0.94 | 0.92 | 0.67 | 0.93 |
| sss-ner-fc-dp-1 | 0.69 || 0.95 | 0.56 | 0.84 | 0.40 | 0.66 | 0.88 | 0.78 | 0.06 | 0.87 |
## Træningsprocedure hyperparametre
Hyperparametre anvendt i træningen
| Model | learning\_rate | train\_batc | eval\_batc | optimizer | | lot\_si | epsilon | delta | num\_epochs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sss-ner | 4.21e−5 | 64 | 64 | Adam W med tas=(0.9,0.999)epsilon=1e-08 | be- og | NA | NA | NA | 15 |
| sss-ner-dp-8 | 0.00044 | 16 | 16 | Adam W med tas=(0.9,0.999) epsilon=1e-08 | be-og | 8 | 2.03e | 512 | 15 |
| sss-ner-dp-1 | 0.00044 | 16 | 16 | Adam W med tas=(0.9,0.999) epsilon=1e-08 | be-og | 1 | 2.03e | 512 | 15 |
| sss-ner-fc | 9.65e−5 | 16 | 16 | Adam W med tas=(0.9,0.999) epsilon=1e-08 | be-og | NA | NA | NA | 15 |
| sss-ner-fc-dp-8 | 0.00044 | 16 | 16 | Adam W med tas=(0.9,0.999) epsilon=1e-08 | be-og | 8 | 2.03e | 512 | 15 |
| sss-ner-fc-dp-1 | 0.00044 | 16 | 16 | Adam W med tas=(0.9,0.999) epsilon=1e-08 | be-og | 1 | 2.03e | 512 | 15 |
## Framework versioner
• transformers 4.19.2
• opacus 1.2.0
• datasets 2.2.2
• numpy==1.22.3
• pytorch 1.13.0+cu11
**Fuld dokumentation kan findes på AI aktindsigts Github linket under projektet her på Huggingface.**
[1](#sdfootnote1anc) [https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-smartnoisedifferential-privacy-machine-learning-case-studies/SmartNoise%20Whitepaper%20Final%203.8.21.pdf](https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-smartnoisedifferential-privacy-machine-learning-case-studies/SmartNoise%20Whitepaper%20Final%203.8.21.pdf)
[2](#sdfootnote2anc) [https://machinelearning.apple.com/research/learning-with-privacy-at-scale](https://machinelearning.apple.com/research/learning-with-privacy-at-scale)
[3](#sdfootnote3anc) [https://arxiv.org/pdf/2110.05679.pdf](https://arxiv.org/pdf/2110.05679.pdf) |
Buseak/sentence_splitter_final_v2 | Buseak | 2024-01-07T11:41:08Z | 662 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"canine",
"token-classification",
"generated_from_trainer",
"base_model:google/canine-s",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-07T11:09:04Z | ---
license: apache-2.0
base_model: google/canine-s
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: sentence_splitter_final_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_splitter_final_v2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 0.8
- Recall: 1.0
- F1: 0.8889
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.0037 | 0.0690 | 0.5 | 0.1212 | 0.9988 |
| No log | 2.0 | 190 | 0.0022 | 0.0909 | 1.0 | 0.1667 | 0.9993 |
| No log | 3.0 | 285 | 0.0014 | 0.1333 | 1.0 | 0.2353 | 0.9995 |
| No log | 4.0 | 380 | 0.0010 | 0.1905 | 1.0 | 0.32 | 0.9996 |
| No log | 5.0 | 475 | 0.0008 | 0.25 | 1.0 | 0.4 | 0.9997 |
| 0.0096 | 6.0 | 570 | 0.0004 | 0.3636 | 1.0 | 0.5333 | 0.9998 |
| 0.0096 | 7.0 | 665 | 0.0004 | 0.2222 | 1.0 | 0.3636 | 0.9999 |
| 0.0096 | 8.0 | 760 | 0.0002 | 0.4 | 1.0 | 0.5714 | 0.9999 |
| 0.0096 | 9.0 | 855 | 0.0003 | 0.1905 | 1.0 | 0.32 | 0.9999 |
| 0.0096 | 10.0 | 950 | 0.0003 | 0.2105 | 1.0 | 0.3478 | 0.9999 |
| 0.0008 | 11.0 | 1045 | 0.0001 | 0.3333 | 1.0 | 0.5 | 1.0000 |
| 0.0008 | 12.0 | 1140 | 0.0001 | 0.5 | 1.0 | 0.6667 | 1.0000 |
| 0.0008 | 13.0 | 1235 | 0.0001 | 0.4444 | 1.0 | 0.6154 | 1.0000 |
| 0.0008 | 14.0 | 1330 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0008 | 15.0 | 1425 | 0.0000 | 0.6667 | 1.0 | 0.8 | 1.0000 |
| 0.0003 | 16.0 | 1520 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0003 | 17.0 | 1615 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0003 | 18.0 | 1710 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0003 | 19.0 | 1805 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0003 | 20.0 | 1900 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0003 | 21.0 | 1995 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0001 | 22.0 | 2090 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0001 | 23.0 | 2185 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0001 | 24.0 | 2280 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
| 0.0001 | 25.0 | 2375 | 0.0000 | 0.8 | 1.0 | 0.8889 | 1.0000 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Buseak/spellcorrector_20_02_050_qwerty_v14 | Buseak | 2024-02-25T16:10:34Z | 662 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"canine",
"token-classification",
"generated_from_trainer",
"base_model:Buseak/spellcorrector_20_02_050_qwerty_v11",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-25T13:49:41Z | ---
license: apache-2.0
base_model: Buseak/spellcorrector_20_02_050_qwerty_v11
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spellcorrector_20_02_050_qwerty_v14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spellcorrector_20_02_050_qwerty_v14
This model is a fine-tuned version of [Buseak/spellcorrector_20_02_050_qwerty_v11](https://huggingface.co/Buseak/spellcorrector_20_02_050_qwerty_v11) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0331 | 1.0 | 967 | 0.0132 | 0.9919 | 0.9893 | 0.9906 | 0.9963 |
| 0.019 | 2.0 | 1934 | 0.0077 | 0.9941 | 0.9936 | 0.9938 | 0.9977 |
| 0.0127 | 3.0 | 2901 | 0.0060 | 0.9973 | 0.9973 | 0.9973 | 0.9982 |
| 0.0102 | 4.0 | 3868 | 0.0035 | 0.9968 | 0.9962 | 0.9965 | 0.9990 |
| 0.0077 | 5.0 | 4835 | 0.0024 | 0.9989 | 0.9989 | 0.9989 | 0.9993 |
| 0.0065 | 6.0 | 5802 | 0.0015 | 0.9989 | 0.9984 | 0.9987 | 0.9996 |
| 0.0047 | 7.0 | 6769 | 0.0012 | 0.9995 | 0.9989 | 0.9992 | 0.9997 |
| 0.0036 | 8.0 | 7736 | 0.0009 | 1.0 | 0.9995 | 0.9997 | 0.9997 |
| 0.0031 | 9.0 | 8703 | 0.0007 | 0.9995 | 0.9995 | 0.9995 | 0.9998 |
| 0.0027 | 10.0 | 9670 | 0.0004 | 1.0 | 1.0 | 1.0 | 0.9999 |
| 0.0021 | 11.0 | 10637 | 0.0003 | 1.0 | 1.0 | 1.0 | 0.9999 |
| 0.0017 | 12.0 | 11604 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0015 | 13.0 | 12571 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0011 | 14.0 | 13538 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.001 | 15.0 | 14505 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0008 | 16.0 | 15472 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0007 | 17.0 | 16439 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0006 | 18.0 | 17406 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0006 | 19.0 | 18373 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.0005 | 20.0 | 19340 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0000 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.