| modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (list, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
|---|---|---|---|---|---|---|---|---|---|
ncoop57/bart-base-code-summarizer-java-v0 | ncoop57 | 2020-12-11T21:56:54Z | 497 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
tags:
- summarization
license: mit
---
## ncoop57/bart-base-code-summarizer-java-v0
|
smilegate-ai/kor_unsmile | smilegate-ai | 2022-03-28T01:34:57Z | 497 | 8 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-28T01:03:23Z | Entry not found |
jitesh/emotion-english | jitesh | 2023-05-09T08:30:32Z | 497 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"emotion",
"20 classes",
"code",
"emotions",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-04-12T04:43:49Z | ---
license: mit
language:
- en
pipeline_tag: text-classification
tags:
- emotion
- 20 classes
- code
- emotions
widget:
- text: I'm so angry right now. I can't believe he did that to me.
example_title: anger
- text: I'm feeling disgusted by the smell of this food.
example_title: disgust
- text: I'm feeling very afraid of what might happen next.
example_title: fear
- text: I'm so joyful right now! This is the best day of my life.
example_title: joy
- text: >-
I'm feeling neutral about this situation. I don't really care one way or
another.
example_title: neutral
- text: I'm feeling really sad today after my dog passed away.
example_title: sadness
- text: I'm so surprised by what just happened! I never saw that coming.
example_title: surprise
- text: I'm feeling cheeky today. I'm going to play a little prank on my friend.
example_title: cheeky
- text: I'm feeling confused about what to do next. I need some guidance.
example_title: confuse
- text: I'm feeling curious about the world around me. There's so much to learn!
example_title: curious
- text: I'm feeling empathetic towards my friend who is going through a tough time.
example_title: empathetic
- text: I'm feeling grumpy today. Everything is annoying me!
example_title: grumpy
- text: I'm feeling guilty about what I did. I wish I could take it back.
example_title: guilty
- text: I'm feeling very energetic today. I'm ready to take on the world!
example_title: energetic
- text: I'm feeling impatient waiting for this movie to start.
example_title: impatient
- text: >-
I'm feeling so much love for my family right now. They mean everything to
me.
example_title: love
- text: I'm thinking about my future and what I want to achieve.
example_title: think
- text: >-
I'm feeling serious about this issue. It's important and needs to be
addressed.
example_title: serious
- text: >-
I'm feeling suspicious of what he's telling me. I think he's hiding
something.
example_title: suspicious
- text: I'm feeling whiny today. Everything is bothering me!
example_title: whiny
- text: I love football so much
example_title: love 2
- text: I'm reflecting on my experiences to gain insights
example_title: think 2
- text: >-
I borrowed money from a friend and haven't paid it back yet. Now I feel
ashamed.
example_title: guilty 2
- text: I'm starting to think that he's up to something.
example_title: suspicious 2
- text: We need to approach this matter with a sense of purpose
example_title: serious 2
---
# Emotion classification from 20 classes
## 20 Emotion labels
| id | label |
| --- | ---------- |
| 0 | anger |
| 1 | cheeky |
| 2 | confuse |
| 3 | curious |
| 4 | disgust |
| 5 | empathetic |
| 6 | energetic |
| 7 | fear |
| 8 | grumpy |
| 9 | guilty |
| 10 | impatient |
| 11 | joy |
| 12 | love |
| 13 | neutral |
| 14 | sadness |
| 15 | serious |
| 16 | surprise |
| 17 | suspicious |
| 18 | think |
| 19 | whiny |
## How to use
Here is how to use this model to get the emotion label of a given text:
```python
from transformers import AutoModelForSequenceClassification, pipeline

# Load the fine-tuned classifier and wrap it in a text-classification pipeline
model_name = 'jitesh/emotion-english'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = pipeline("text-classification", model=model, tokenizer=model_name)

text = "I can't wait any longer "
prediction = classifier(text)  # returns the top emotion label and its score
print(prediction[0], text)
```
The above code outputs the following line.
```bash
{'label': 'impatient', 'score': 0.924211859703064} I can't wait any longer
``` |
CiaraRowles/TemporalNet2 | CiaraRowles | 2023-11-15T19:08:04Z | 497 | 122 | diffusers | [
"diffusers",
"controlnet",
"stable-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
]
| null | 2023-06-07T21:23:21Z | ---
license: openrail
tags:
- controlnet
- stable-diffusion
- diffusers
base_model: runwayml/stable-diffusion-v1-5
---
Introducing TemporalNet2
TemporalNet was a ControlNet model designed to enhance the temporal consistency of generated outputs.
TemporalNet2 is an evolution of that concept: the generated outputs are guided by both the last frame *and* an optical flow map between the frames, improving generation consistency.
This required some modification of the original ControlNet code, so a few extra steps are needed. If you just want to run a Gradio example or look at the modified ControlNet code,
that's here: https://github.com/CiaraStrawberry/TemporalNet. Just drop the model from this directory into that repo's model folder and make sure the gradio_temporalnet.py script points at the model.
To use it with Stable Diffusion, either use it with TemporalKit by moving to this branch after following steps 1 and 2 (https://github.com/CiaraStrawberry/TemporalKit/tree/TemporalNet), or use it directly through the base API via the temporalvideo.py script:
1) Move your ControlNet WebUI install to this branch: https://github.com/CiaraStrawberry/sd-webui-controlnet-TemporalNet-API
2) Add the model to your models folder in the ControlNet extension in Automatic1111's Web UI.
3) Check you have:
- A folder named "Input_Images" with the input frames
- A PNG file called "init.png" that is pre-stylized in your desired style
- The "temporalvideo.py" script
4) Customize the "temporalvideo.py" script according to your preferences, such as the image resolution, prompt, and control net settings.
5) Launch Automatic1111's Web UI with the --api setting enabled.
6) Execute the Python script.
*Please note that the "init.png" image will not significantly influence the style of the output video. Its primary purpose is to prevent a drastic change in aesthetics during the first few frames.*
Also, I highly recommend using this in conjunction with the HED model; the settings are already in the script.
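For orientation, here is a minimal sketch of the kind of request such a script sends to the WebUI API. The file paths and the ControlNet model string are illustrative, not taken from the actual temporalvideo.py, and the real script additionally computes the optical flow input:
```python
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Illustrative payload for the /sdapi/v1/img2img endpoint exposed by --api;
# the ControlNet unit is fed the previous stylized frame for consistency.
payload = {
    "init_images": [b64("Input_Images/frame_0002.png")],
    "prompt": "your style prompt here",
    "denoising_strength": 0.75,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("init.png"),  # previous output frame
                "model": "temporalnet2",         # hypothetical model name
                "module": "none",
                "weight": 1.0,
            }]
        }
    },
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("output_0002.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```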
ToDo:
- Write an extension for the web UI.
- Write a feature that automatically generates an "init.png" image if none is provided.
- ~~Change the extension to .safetensors and investigate compression.~~
|
Trelis/Llama-2-7b-chat-hf-sharded-bf16 | Trelis | 2023-07-31T21:42:59Z | 497 | 13 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"sharded",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-21T12:48:03Z | ---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- sharded
---
# **llama-2-chat-7b-hf (sharded)**
This is a sharded version of Meta's Llama 2 Chat 7B model, specifically the Hugging Face Transformers version.
All details below are copied from the original repo.
Colab notebook for sharding: https://colab.research.google.com/drive/1f1q9qc56wzB_7-bjgNyLlO6f28ui1esQ
Colab notebook for inference: https://colab.research.google.com/drive/1zxwaTSvd6PSHbtyaoa7tfedAS31j_N6m
## Inference with Google Colab and HuggingFace 🤗
Get started by saving your own copy of this [fLlama_Inference notebook](https://colab.research.google.com/drive/1Ow5cQ0JNv-vXsT-apCceH6Na3b4L7JyW?usp=sharing).
You will be able to run inference using a free Colab notebook if you select a GPU runtime. See the notebook for more details.
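As a quick orientation, here is a minimal loading sketch for this sharded checkpoint (parameters are illustrative; the notebooks above cover this in detail). Because the checkpoint is split into small shards, peak RAM during loading stays low enough for a free Colab GPU runtime:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires `pip install transformers accelerate`
model_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
prompt = "[INST] Why shard model weights? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```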
~
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
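For illustration, here is a minimal sketch of that single-turn format (the helper function is hypothetical; the linked `chat_completion` code is the authoritative reference, and the `BOS`/`EOS` tokens are added by the tokenizer):
```python
# Sketch of the single-turn Llama 2 chat format described above
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt: str, user_message: str) -> str:
    # strip() avoids the double spaces the card warns about
    return f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_message.strip()} {E_INST}"

print(build_prompt("You are a helpful assistant.", "What is grouped-query attention?"))
```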
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
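As a quick consistency check of the table above (illustrative arithmetic only): emissions scale linearly with GPU-hours at the stated 400 W, implying a single carbon-intensity factor across all three model sizes.
```python
# Verify the emissions table: tCO2eq per MWh is constant across model sizes
rows = [("7B", 184320, 31.22), ("13B", 368640, 62.44), ("70B", 1720320, 291.42)]
for name, gpu_hours, tco2 in rows:
    mwh = gpu_hours * 400 / 1e6               # energy at 400 W per GPU
    print(name, round(tco2 / mwh, 3), "tCO2eq/MWh")  # ~0.423 for every row
```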
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)| |
TheBloke/Vicuna-7B-CoT-GGUF | TheBloke | 2023-09-27T12:53:24Z | 497 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:1910.09700",
"base_model:kevinpro/Vicuna-7B-CoT",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T02:04:41Z | ---
license: other
model_name: Vicuna 7B CoT
base_model: kevinpro/Vicuna-7B-CoT
inference: false
model_creator: Shuaijie She
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vicuna 7B CoT - GGUF
- Model creator: [Shuaijie She](https://huggingface.co/kevinpro)
- Original model: [Vicuna 7B CoT](https://huggingface.co/kevinpro/Vicuna-7B-CoT)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/kevinpro/Vicuna-7B-CoT).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vicuna-7B-CoT-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF)
* [Shuaijie She's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
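As a rough sanity check of these bits-per-weight figures against the file sizes below (illustrative only; real GGUF files also contain some non-quantized tensors and metadata):
```python
# Approximate file size = parameter count x bits-per-weight / 8
params = 6.74e9  # Llama 7B parameter count (approximate)
for name, bpw in [("Q2_K", 2.5625), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(name, f"~{params * bpw / 8 / 1e9:.2f} GB")  # close to the table below
```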
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [vicuna-7b-cot.Q2_K.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-7b-cot.Q3_K_S.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [vicuna-7b-cot.Q3_K_M.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [vicuna-7b-cot.Q3_K_L.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [vicuna-7b-cot.Q4_0.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-7b-cot.Q4_K_S.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [vicuna-7b-cot.Q4_K_M.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [vicuna-7b-cot.Q5_0.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-7b-cot.Q5_K_S.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [vicuna-7b-cot.Q5_K_M.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [vicuna-7b-cot.Q6_K.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [vicuna-7b-cot.Q8_0.gguf](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGUF/blob/main/vicuna-7b-cot.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Vicuna-7B-CoT-GGUF and below it, a specific filename to download, such as: vicuna-7b-cot.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Vicuna-7B-CoT-GGUF vicuna-7b-cot.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Vicuna-7B-CoT-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Vicuna-7B-CoT-GGUF vicuna-7b-cot.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m vicuna-7b-cot.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
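For llama-cpp-python, a minimal sketch looks like this (parameters are illustrative; the prompt follows the Vicuna template above):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="./vicuna-7b-cot.Q4_K_M.gguf",
    n_ctx=2048,       # sequence length
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Explain chain-of-thought prompting briefly. ASSISTANT:"
)
output = llm(prompt, max_tokens=128, stop=["USER:"], temperature=0.7)
print(output["choices"][0]["text"])
```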
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Vicuna-7B-CoT-GGUF", model_file="vicuna-7b-cot.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Kevin Pro's Vicuna 7B CoT
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kevin Pro's Vicuna 7B CoT fp16
These files are pytorch format fp16 model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/kevinpro/Vicuna-7B-CoT).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kevin Pro's Vicuna 7B CoT
# Model Card for Model ID
SFT to enhance the CoT capability of Vicuna.
If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section.
Another 13B version: https://huggingface.co/kevinpro/Vicuna-13B-CoT
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
<!-- original-model-card end -->
|
maddes8cht/h2oai-h2ogpt-gm-oasst1-en-2048-falcon-7b-v3-gguf | maddes8cht | 2023-11-22T20:26:18Z | 497 | 1 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-01T13:30:07Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 - GGUF
- Model creator: [h2oai](https://huggingface.co/h2oai)
- Original model: [h2ogpt-gm-oasst1-en-2048-falcon-7b-v3](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3)
# K-Quants in Falcon 7b models
New releases of llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (while Falcon 40B is, and always has been, fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size, or smaller file sizes with comparable performance.
This solution therefore ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0 and Q5_1 quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it hard to discern a quality difference from the original model; asking the model the same question twice may yield bigger differences than the quantization itself introduces.
---
# Original Model Card:
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
```
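As a quick cross-check, these dimensions account for roughly 7B parameters. A back-of-envelope sketch, assuming the LM head shares the embedding matrix as in Falcon:
```python
# Rough parameter count from the architecture printout above
vocab, d, n_layers = 65024, 4544, 32
per_layer = (
    d * 4672          # fused query_key_value projection
    + d * d           # attention output dense
    + 2 * d * 18176   # MLP up- and down-projections
    + 2 * d           # input_layernorm weight and bias
)
total = vocab * d + n_layers * per_layer + 2 * d  # embedding + blocks + ln_f
print(f"~{total / 1e9:.2f}B parameters")  # ~6.92B, i.e. the "7B" model
```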
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>

[maddes8cht.github.io](https://maddes8cht.github.io) | [Stack Exchange](https://stackexchange.com/users/26485911) | [GitHub](https://github.com/maddes8cht) | [Hugging Face](https://huggingface.co/maddes8cht) | [Twitter](https://twitter.com/maddes1966)

</center> |
Yntec/3DKX2 | Yntec | 2023-10-11T05:33:11Z | 497 | 2 | diffusers | [
"diffusers",
"safetensors",
"General",
"3D",
"Cartoon",
"unvailai",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-11T03:55:41Z | ---
license: other
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- 3D
- Cartoon
- unvailai
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
inference: false
---
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW output.
# 3DKX 2
Check the license at the original page: https://huggingface.co/unvailai/3DKX_V2
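Since the repository ships diffusers-format weights, a minimal loading sketch follows (settings are illustrative; mind the license and the NSFW warning above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/3DKX2", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "photo of a cozy cabin in a snowy forest, 3d render",
    negative_prompt="nude, naked",  # per the warning above
    num_inference_steps=25,
).images[0]
image.save("sample.png")
```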
Sample and prompt:

Photo of a standing figure of a cute five years old girl in front of a pc computer monitor in an old dirty soviet apartment by and mark brooks, vladimir volegov, rich deep colors. beksinski painting, from a movie by david cronenberg. masterpiece. photographed with leica summilux - m 2 4 mm lens, iso 1 0 0, f / |
Amu/tao | Amu | 2023-12-03T11:45:19Z | 497 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"zh",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2023-10-18T05:51:02Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: tao
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 47.33752515292192
- type: cos_sim_spearman
value: 49.940772056837176
- type: euclidean_pearson
value: 48.12147487857213
- type: euclidean_spearman
value: 49.9407519488174
- type: manhattan_pearson
value: 48.07550286372865
- type: manhattan_spearman
value: 49.89535645392862
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 50.976865711125626
- type: cos_sim_spearman
value: 53.113084748593465
- type: euclidean_pearson
value: 55.1209592747571
- type: euclidean_spearman
value: 53.11308362230699
- type: manhattan_pearson
value: 55.09799309322416
- type: manhattan_spearman
value: 53.108059998577076
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.812
- type: f1
value: 39.02060856097395
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 62.84336868097746
- type: cos_sim_spearman
value: 65.540605433497
- type: euclidean_pearson
value: 64.08759819387913
- type: euclidean_spearman
value: 65.54060543369363
- type: manhattan_pearson
value: 64.09334283385029
- type: manhattan_spearman
value: 65.55376209169398
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 39.964020691388505
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 38.18628830038994
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 85.34294439514511
- type: mrr
value: 88.03849206349206
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 85.87127698007234
- type: mrr
value: 88.57980158730159
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.484
- type: map_at_10
value: 36.3
- type: map_at_100
value: 38.181
- type: map_at_1000
value: 38.305
- type: map_at_3
value: 32.39
- type: map_at_5
value: 34.504000000000005
- type: mrr_at_1
value: 37.608999999999995
- type: mrr_at_10
value: 45.348
- type: mrr_at_100
value: 46.375
- type: mrr_at_1000
value: 46.425
- type: mrr_at_3
value: 42.969
- type: mrr_at_5
value: 44.285999999999994
- type: ndcg_at_1
value: 37.608999999999995
- type: ndcg_at_10
value: 42.675999999999995
- type: ndcg_at_100
value: 50.12799999999999
- type: ndcg_at_1000
value: 52.321
- type: ndcg_at_3
value: 37.864
- type: ndcg_at_5
value: 39.701
- type: precision_at_1
value: 37.608999999999995
- type: precision_at_10
value: 9.527
- type: precision_at_100
value: 1.555
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.547
- type: precision_at_5
value: 15.504000000000001
- type: recall_at_1
value: 24.484
- type: recall_at_10
value: 52.43299999999999
- type: recall_at_100
value: 83.446
- type: recall_at_1000
value: 98.24199999999999
- type: recall_at_3
value: 37.653
- type: recall_at_5
value: 43.643
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 77.71497294046902
- type: cos_sim_ap
value: 86.84542027578229
- type: cos_sim_f1
value: 79.31987247608926
- type: cos_sim_precision
value: 72.70601987142022
- type: cos_sim_recall
value: 87.2574234276362
- type: dot_accuracy
value: 77.71497294046902
- type: dot_ap
value: 86.86514752961159
- type: dot_f1
value: 79.31987247608926
- type: dot_precision
value: 72.70601987142022
- type: dot_recall
value: 87.2574234276362
- type: euclidean_accuracy
value: 77.71497294046902
- type: euclidean_ap
value: 86.84541456571337
- type: euclidean_f1
value: 79.31987247608926
- type: euclidean_precision
value: 72.70601987142022
- type: euclidean_recall
value: 87.2574234276362
- type: manhattan_accuracy
value: 77.8111846061335
- type: manhattan_ap
value: 86.81148050422539
- type: manhattan_f1
value: 79.41176470588236
- type: manhattan_precision
value: 72.52173913043478
- type: manhattan_recall
value: 87.74842179097499
- type: max_accuracy
value: 77.8111846061335
- type: max_ap
value: 86.86514752961159
- type: max_f1
value: 79.41176470588236
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 68.862
- type: map_at_10
value: 77.079
- type: map_at_100
value: 77.428
- type: map_at_1000
value: 77.432
- type: map_at_3
value: 75.40400000000001
- type: map_at_5
value: 76.227
- type: mrr_at_1
value: 69.02000000000001
- type: mrr_at_10
value: 77.04299999999999
- type: mrr_at_100
value: 77.391
- type: mrr_at_1000
value: 77.395
- type: mrr_at_3
value: 75.44800000000001
- type: mrr_at_5
value: 76.23299999999999
- type: ndcg_at_1
value: 69.02000000000001
- type: ndcg_at_10
value: 80.789
- type: ndcg_at_100
value: 82.27499999999999
- type: ndcg_at_1000
value: 82.381
- type: ndcg_at_3
value: 77.40599999999999
- type: ndcg_at_5
value: 78.87100000000001
- type: precision_at_1
value: 69.02000000000001
- type: precision_at_10
value: 9.336
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.889000000000003
- type: precision_at_5
value: 17.492
- type: recall_at_1
value: 68.862
- type: recall_at_10
value: 92.308
- type: recall_at_100
value: 98.84100000000001
- type: recall_at_1000
value: 99.684
- type: recall_at_3
value: 83.087
- type: recall_at_5
value: 86.617
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.063999999999997
- type: map_at_10
value: 78.014
- type: map_at_100
value: 81.021
- type: map_at_1000
value: 81.059
- type: map_at_3
value: 53.616
- type: map_at_5
value: 68.00399999999999
- type: mrr_at_1
value: 87.8
- type: mrr_at_10
value: 91.824
- type: mrr_at_100
value: 91.915
- type: mrr_at_1000
value: 91.917
- type: mrr_at_3
value: 91.525
- type: mrr_at_5
value: 91.752
- type: ndcg_at_1
value: 87.8
- type: ndcg_at_10
value: 85.74199999999999
- type: ndcg_at_100
value: 88.82900000000001
- type: ndcg_at_1000
value: 89.208
- type: ndcg_at_3
value: 84.206
- type: ndcg_at_5
value: 83.421
- type: precision_at_1
value: 87.8
- type: precision_at_10
value: 41.325
- type: precision_at_100
value: 4.8
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 75.783
- type: precision_at_5
value: 64.25999999999999
- type: recall_at_1
value: 25.063999999999997
- type: recall_at_10
value: 87.324
- type: recall_at_100
value: 97.261
- type: recall_at_1000
value: 99.309
- type: recall_at_3
value: 56.281000000000006
- type: recall_at_5
value: 73.467
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 46.800000000000004
- type: map_at_10
value: 56.887
- type: map_at_100
value: 57.556
- type: map_at_1000
value: 57.582
- type: map_at_3
value: 54.15
- type: map_at_5
value: 55.825
- type: mrr_at_1
value: 46.800000000000004
- type: mrr_at_10
value: 56.887
- type: mrr_at_100
value: 57.556
- type: mrr_at_1000
value: 57.582
- type: mrr_at_3
value: 54.15
- type: mrr_at_5
value: 55.825
- type: ndcg_at_1
value: 46.800000000000004
- type: ndcg_at_10
value: 62.061
- type: ndcg_at_100
value: 65.042
- type: ndcg_at_1000
value: 65.658
- type: ndcg_at_3
value: 56.52700000000001
- type: ndcg_at_5
value: 59.518
- type: precision_at_1
value: 46.800000000000004
- type: precision_at_10
value: 7.84
- type: precision_at_100
value: 0.9169999999999999
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 21.133
- type: precision_at_5
value: 14.12
- type: recall_at_1
value: 46.800000000000004
- type: recall_at_10
value: 78.4
- type: recall_at_100
value: 91.7
- type: recall_at_1000
value: 96.39999999999999
- type: recall_at_3
value: 63.4
- type: recall_at_5
value: 70.6
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 48.010773374374764
- type: f1
value: 35.25314495210735
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 87.01688555347093
- type: ap
value: 56.39167630414159
- type: f1
value: 81.91756262306008
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 71.17867432738112
- type: cos_sim_spearman
value: 77.47954247528372
- type: euclidean_pearson
value: 76.32408876437825
- type: euclidean_spearman
value: 77.47954025694959
- type: manhattan_pearson
value: 76.33345801575938
- type: manhattan_spearman
value: 77.48901582125997
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 27.96333052746654
- type: mrr
value: 26.92023809523809
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 66.144
- type: map_at_10
value: 75.036
- type: map_at_100
value: 75.36
- type: map_at_1000
value: 75.371
- type: map_at_3
value: 73.258
- type: map_at_5
value: 74.369
- type: mrr_at_1
value: 68.381
- type: mrr_at_10
value: 75.633
- type: mrr_at_100
value: 75.91799999999999
- type: mrr_at_1000
value: 75.928
- type: mrr_at_3
value: 74.093
- type: mrr_at_5
value: 75.036
- type: ndcg_at_1
value: 68.381
- type: ndcg_at_10
value: 78.661
- type: ndcg_at_100
value: 80.15
- type: ndcg_at_1000
value: 80.456
- type: ndcg_at_3
value: 75.295
- type: ndcg_at_5
value: 77.14999999999999
- type: precision_at_1
value: 68.381
- type: precision_at_10
value: 9.481
- type: precision_at_100
value: 1.023
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.309
- type: precision_at_5
value: 17.974
- type: recall_at_1
value: 66.144
- type: recall_at_10
value: 89.24499999999999
- type: recall_at_100
value: 96.032
- type: recall_at_1000
value: 98.437
- type: recall_at_3
value: 80.327
- type: recall_at_5
value: 84.733
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.26832548755884
- type: f1
value: 65.97422207086723
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.13046402151984
- type: f1
value: 72.69199129694121
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 50.4
- type: map_at_10
value: 56.645
- type: map_at_100
value: 57.160999999999994
- type: map_at_1000
value: 57.218
- type: map_at_3
value: 55.383
- type: map_at_5
value: 56.08800000000001
- type: mrr_at_1
value: 50.6
- type: mrr_at_10
value: 56.745999999999995
- type: mrr_at_100
value: 57.262
- type: mrr_at_1000
value: 57.318999999999996
- type: mrr_at_3
value: 55.483000000000004
- type: mrr_at_5
value: 56.188
- type: ndcg_at_1
value: 50.4
- type: ndcg_at_10
value: 59.534
- type: ndcg_at_100
value: 62.400999999999996
- type: ndcg_at_1000
value: 64.01299999999999
- type: ndcg_at_3
value: 56.887
- type: ndcg_at_5
value: 58.160000000000004
- type: precision_at_1
value: 50.4
- type: precision_at_10
value: 6.859999999999999
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 20.4
- type: precision_at_5
value: 12.86
- type: recall_at_1
value: 50.4
- type: recall_at_10
value: 68.60000000000001
- type: recall_at_100
value: 82.8
- type: recall_at_1000
value: 95.7
- type: recall_at_3
value: 61.199999999999996
- type: recall_at_5
value: 64.3
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 73.39666666666666
- type: f1
value: 72.86349039489504
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 73.36220898754738
- type: cos_sim_ap
value: 78.50300066088354
- type: cos_sim_f1
value: 75.39370078740157
- type: cos_sim_precision
value: 70.59907834101382
- type: cos_sim_recall
value: 80.8870116156283
- type: dot_accuracy
value: 73.36220898754738
- type: dot_ap
value: 78.50300066088354
- type: dot_f1
value: 75.39370078740157
- type: dot_precision
value: 70.59907834101382
- type: dot_recall
value: 80.8870116156283
- type: euclidean_accuracy
value: 73.36220898754738
- type: euclidean_ap
value: 78.50300066088354
- type: euclidean_f1
value: 75.39370078740157
- type: euclidean_precision
value: 70.59907834101382
- type: euclidean_recall
value: 80.8870116156283
- type: manhattan_accuracy
value: 73.09149972929075
- type: manhattan_ap
value: 78.41160715817406
- type: manhattan_f1
value: 75.3623188405797
- type: manhattan_precision
value: 69.45681211041853
- type: manhattan_recall
value: 82.36536430834214
- type: max_accuracy
value: 73.36220898754738
- type: max_ap
value: 78.50300066088354
- type: max_f1
value: 75.39370078740157
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 91.82000000000001
- type: ap
value: 89.3671278896903
- type: f1
value: 91.8021970144045
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 30.07022294131062
- type: cos_sim_spearman
value: 36.21542804954441
- type: euclidean_pearson
value: 36.37841945307606
- type: euclidean_spearman
value: 36.215513214835546
- type: manhattan_pearson
value: 36.31755715017088
- type: manhattan_spearman
value: 36.16848256918425
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.779755871073505
- type: cos_sim_spearman
value: 38.736220679196606
- type: euclidean_pearson
value: 37.13356686891227
- type: euclidean_spearman
value: 38.73619198602118
- type: manhattan_pearson
value: 37.175466658530816
- type: manhattan_spearman
value: 38.74523158724344
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.9737863254904
- type: cos_sim_spearman
value: 68.88293545840186
- type: euclidean_pearson
value: 67.23730973929247
- type: euclidean_spearman
value: 68.88293545840186
- type: manhattan_pearson
value: 67.30647960940956
- type: manhattan_spearman
value: 68.90553460682702
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 78.99371432933002
- type: cos_sim_spearman
value: 79.36496709214312
- type: euclidean_pearson
value: 78.77721120706431
- type: euclidean_spearman
value: 79.36500761622595
- type: manhattan_pearson
value: 78.82503201285202
- type: manhattan_spearman
value: 79.43915548337401
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.38418982516941
- type: mrr
value: 76.09996131153883
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 77.209
- type: map_at_100
value: 80.838
- type: map_at_1000
value: 80.903
- type: map_at_3
value: 54.196
- type: map_at_5
value: 66.664
- type: mrr_at_1
value: 90.049
- type: mrr_at_10
value: 92.482
- type: mrr_at_100
value: 92.568
- type: mrr_at_1000
value: 92.572
- type: mrr_at_3
value: 92.072
- type: mrr_at_5
value: 92.33
- type: ndcg_at_1
value: 90.049
- type: ndcg_at_10
value: 84.69200000000001
- type: ndcg_at_100
value: 88.25699999999999
- type: ndcg_at_1000
value: 88.896
- type: ndcg_at_3
value: 86.09700000000001
- type: ndcg_at_5
value: 84.68599999999999
- type: precision_at_1
value: 90.049
- type: precision_at_10
value: 42.142
- type: precision_at_100
value: 5.017
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.358
- type: precision_at_5
value: 63.173
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 83.59400000000001
- type: recall_at_100
value: 95.21
- type: recall_at_1000
value: 98.503
- type: recall_at_3
value: 55.849000000000004
- type: recall_at_5
value: 69.986
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 51.925999999999995
- type: f1
value: 50.16867723626971
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 60.738901671970005
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 57.08563183138733
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 52
- type: map_at_10
value: 62.956
- type: map_at_100
value: 63.491
- type: map_at_1000
value: 63.50599999999999
- type: map_at_3
value: 60.733000000000004
- type: map_at_5
value: 62.217999999999996
- type: mrr_at_1
value: 52
- type: mrr_at_10
value: 62.956
- type: mrr_at_100
value: 63.491
- type: mrr_at_1000
value: 63.50599999999999
- type: mrr_at_3
value: 60.733000000000004
- type: mrr_at_5
value: 62.217999999999996
- type: ndcg_at_1
value: 52
- type: ndcg_at_10
value: 67.956
- type: ndcg_at_100
value: 70.536
- type: ndcg_at_1000
value: 70.908
- type: ndcg_at_3
value: 63.456999999999994
- type: ndcg_at_5
value: 66.155
- type: precision_at_1
value: 52
- type: precision_at_10
value: 8.35
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 23.767
- type: precision_at_5
value: 15.58
- type: recall_at_1
value: 52
- type: recall_at_10
value: 83.5
- type: recall_at_100
value: 95.5
- type: recall_at_1000
value: 98.4
- type: recall_at_3
value: 71.3
- type: recall_at_5
value: 77.9
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 87.10000000000001
- type: ap
value: 70.81766065881429
- type: f1
value: 85.5323306120456
license: apache-2.0
language:
- zh
---
A try at an embedding model:
The method is the same as stella-v2; I just fine-tuned it on a small dataset as a test.
Now I'm working on tao-v2, which will have a different structure.
I will release tao-v2 as soon as I can.
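Since this card does not yet include usage instructions, here is a minimal usage sketch, assuming the standard sentence-transformers API (an assumption based on the `sentence-transformers` tag; verify against the repository):

```python
from sentence_transformers import SentenceTransformer

# Load the model and encode a pair of example Chinese sentences
# (the model is evaluated on the Chinese MTEB tasks listed above).
model = SentenceTransformer("Amu/tao")
sentences = ["今天天气不错", "今天的天气很好"]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)
```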
Thank you to the open source community. |
gorilla-llm/gorilla-7b-hf-v1-gguf | gorilla-llm | 2024-01-29T10:45:34Z | 497 | 1 | null | [
"gguf",
"region:us"
]
| null | 2024-01-29T10:41:36Z | Entry not found |
m-a-p/OpenCodeInterpreter-CL-7B | m-a-p | 2024-03-03T11:44:30Z | 497 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"arxiv:2402.14658",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-19T05:52:42Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
---
<h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1>
<p align="center">
<img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
</p>
<p align="center">
<a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
|
<a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
</p>
<hr>
## Introduction
OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities.
For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv.
## Model Information
This model is based on [CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf).
## Benchmark Scores
The OpenCodeInterpreter model series illustrates how coding-model performance evolves, particularly the significant gains brought by integrating execution feedback. To quantify these improvements, we present a detailed comparison across two critical benchmarks: HumanEval and MBPP. This comparison showcases the individual performance metrics on each benchmark and provides an aggregated view of the overall enhancement. The following table summarizes the performance data, offering a clear perspective on how execution feedback elevates the models' capabilities in code interpretation and execution tasks.
| **Benchmark** | **HumanEval (+)** | **MBPP (+)** | **Average (+)** |
|---------------|-------------------|--------------|-----------------|
| **OpenCodeInterpreter-DS-1.3B** | 65.2 (61.0) | 63.4 (52.4) | 64.3 (56.7) |
| + Execution Feedback | 65.2 (62.2) | 65.2 (55.6) | 65.2 (58.9) |
| **OpenCodeInterpreter-DS-6.7B** | 76.2 (72.0) | 73.9 (63.7) | 75.1 (67.9) |
| + Execution Feedback | 81.1 (78.7) | 82.7 (72.4) | 81.9 (75.6) |
| + Synth. Human Feedback | 87.2 (86.6) | 86.2 (74.2) | 86.7 (80.4) |
| + Synth. Human Feedback (Oracle) | 89.7 (86.6) | 87.2 (75.2) | 88.5 (80.9) |
| **OpenCodeInterpreter-DS-33B** | 79.3 (74.3) | 78.7 (66.4) | 79.0 (70.4) |
| + Execution Feedback | 82.9 (80.5) | 83.5 (72.2) | 83.2 (76.4) |
| + Synth. Human Feedback | 88.4 (86.0) | 87.5 (75.9) | 88.0 (81.0) |
| + Synth. Human Feedback (Oracle) | 92.7 (89.7) | 90.5 (79.5) | 91.6 (84.6) |
| **OpenCodeInterpreter-CL-7B** | 72.6 (67.7) | 66.4 (55.4) | 69.5 (61.6) |
| + Execution Feedback | 75.6 (70.1) | 69.9 (60.7) | 72.8 (65.4) |
| **OpenCodeInterpreter-CL-13B** | 77.4 (73.8) | 70.7 (59.2) | 74.1 (66.5) |
| + Execution Feedback | 81.1 (76.8) | 78.2 (67.2) | 79.7 (72.0) |
| **OpenCodeInterpreter-CL-34B** | 78.0 (72.6) | 73.4 (61.4) | 75.7 (67.0) |
| + Execution Feedback | 81.7 (78.7) | 80.2 (67.9) | 81.0 (73.3) |
| **OpenCodeInterpreter-CL-70B** | 76.2 (70.7) | 73.0 (61.9) | 74.6 (66.3) |
| + Execution Feedback | 79.9 (77.4) | 81.5 (69.9) | 80.7 (73.7) |
| **OpenCodeInterpreter-GM-7B** | 56.1 (50.0) | 39.8 (34.6) | 48.0 (42.3) |
| + Execution Feedback | 64.0 (54.3) | 48.6 (40.9) | 56.3 (47.6) |
| **OpenCodeInterpreter-SC2-3B** | 65.2 (57.9) | 62.7 (52.9) | 64.0 (55.4) |
| + Execution Feedback | 67.1 (60.4) | 63.4 (54.9) | 65.3 (57.7) |
| **OpenCodeInterpreter-SC2-7B** | 73.8 (68.9) | 61.7 (51.1) | 67.8 (60.0) |
| + Execution Feedback | 75.6 (69.5) | 66.9 (55.4) | 71.3 (62.5) |
| **OpenCodeInterpreter-SC2-15B** | 75.6 (69.5) | 71.2 (61.2) | 73.4 (65.4) |
| + Execution Feedback | 77.4 (72.0) | 74.2 (63.4) | 75.8 (67.7) |
*Note: The "(+)" notation represents scores from extended versions of the HumanEval and MBPP benchmarks. To ensure a fair comparison, the results shown for adding execution feedback are based on outcomes after just one iteration of feedback, without unrestricted iterations. This approach highlights the immediate impact of execution feedback on performance improvements across benchmarks.*
## Model Usage
### Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path="m-a-p/OpenCodeInterpreter-CL-7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
model.eval()
prompt = "Write a function to find the shared elements from the given two lists."
inputs = tokenizer.apply_chat_template(
[{'role': 'user', 'content': prompt }],
return_tensors="pt"
).to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=1024,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Contact
If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected].
We're here to assist you! |
liamhvn/epiCPhotoGasm-ultimate-fidelity | liamhvn | 2024-04-08T08:27:53Z | 497 | 1 | diffusers | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-04-08T07:00:01Z | Entry not found |
habulaj/1712704710052x226978808855832450 | habulaj | 2024-04-10T00:09:36Z | 497 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:decastro/David_david",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2024-04-09T23:18:44Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of TOK
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- decastro/David_david
---
# LoRA DreamBooth - squaadinc/1712704710052x226978808855832450
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
A photo of TOK
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'squaadinc/1712704710052x226978808855832450',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic A photo of TOK jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
PrunaAI/Llama-3-8b-64k-PoSE-GGUF-smashed | PrunaAI | 2024-04-25T21:13:26Z | 497 | 2 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-04-25T20:16:33Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the winglian/Llama-3-8b-64k-PoSE model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Llama-3-8b-64k-PoSE-GGUF-smashed and below it, a specific filename to download, such as: Llama-3-8b-64k-PoSE.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Llama-3-8b-64k-PoSE-GGUF-smashed Llama-3-8b-64k-PoSE.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Llama-3-8b-64k-PoSE-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Llama-3-8b-64k-PoSE-GGUF-smashed Llama-3-8b-64k-PoSE.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Llama-3-8b-64k-PoSE.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g., for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Llama-3-8b-64k-PoSE.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Llama-3-8b-64k-PoSE.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
yartyjung/Fake-Review-Detector | yartyjung | 2024-05-29T12:55:23Z | 497 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-27T11:27:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
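Until the authors fill this section in, here is a minimal sketch assuming the standard transformers pipeline API (an assumption based on the model's `text-classification` tag; the expected input language and label set are not documented in this card):

```python
from transformers import pipeline

# Load the classifier; the label names returned depend on the model's config.
classifier = pipeline("text-classification", model="yartyjung/Fake-Review-Detector")
print(classifier("This is an example review."))
```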
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danielhanchen/gguf-16062024 | danielhanchen | 2024-06-15T16:31:14Z | 497 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-15T16:19:43Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** danielhanchen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
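Since this repository ships GGUF files, here is a minimal llama-cpp-python loading sketch (the filename pattern is an assumption; check the repository's file list for the actual quant names):

```python
from llama_cpp import Llama

# Download a matching GGUF file from the Hub and load it.
# The "*Q4_K_M.gguf" pattern is hypothetical; pick a file that exists in the repo.
llm = Llama.from_pretrained(
    repo_id="danielhanchen/gguf-16062024",
    filename="*Q4_K_M.gguf",
)
print(llm("Hello, world!", max_tokens=32)["choices"][0]["text"])
```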
|
arcee-ai/Llama-3-SEC-Chat-GGUF | arcee-ai | 2024-06-19T01:01:39Z | 497 | 7 | null | [
"gguf",
"large_language_model",
"finance",
"sec_data",
"continual_pre_training",
"text-generation",
"en",
"dataset:SEC_filings",
"license:llama3",
"region:us"
]
| text-generation | 2024-06-18T16:41:52Z | ---
language: en
license: llama3
tags:
- large_language_model
- finance
- sec_data
- continual_pre_training
datasets:
- SEC_filings
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-SEC-Chat
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3166">b3166</a> for quantization.
Original model: https://huggingface.co/arcee-ai/Llama-3-SEC-Chat
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
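A minimal llama-cpp-python sketch using this template (illustrative only; the filename is one of the quants listed below, and `chat_format="chatml"` is an assumption based on the `<|im_start|>`/`<|im_end|>` template above):

```python
from llama_cpp import Llama

# chat_format="chatml" matches the prompt template shown above.
llm = Llama(model_path="./Llama-3-SEC-Chat-Q4_K_M.gguf", chat_format="chatml", n_ctx=8192)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a helpful financial analysis assistant."},
    {"role": "user", "content": "What is a 10-K filing?"},
])
print(out["choices"][0]["message"]["content"])
```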
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-SEC-Chat-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/tree/main/Llama-3-SEC-Chat-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-SEC-Chat-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/tree/main/Llama-3-SEC-Chat-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-SEC-Chat-Q5_K_L.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q5_K_L.gguf) | Q5_K_L | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback on differences. High quality, *recommended*. |
| [Llama-3-SEC-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [Llama-3-SEC-Chat-Q4_K_L.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q4_K_L.gguf) | Q4_K_L | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback on differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-SEC-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-SEC-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ4_XS.gguf) | IQ4_XS | | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-SEC-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [Llama-3-SEC-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ3_M.gguf) | IQ3_M | | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-SEC-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q3_K_S.gguf) | Q3_K_S | | Low quality, not recommended. |
| [Llama-3-SEC-Chat-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ3_XXS.gguf) | IQ3_XXS | | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-SEC-Chat-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [Llama-3-SEC-Chat-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ2_M.gguf) | IQ2_M | | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-SEC-Chat-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ2_XS.gguf) | IQ2_XS | | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-SEC-Chat-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ2_XXS.gguf) | IQ2_XXS | | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-SEC-Chat-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-SEC-Chat-GGUF/blob/main/Llama-3-SEC-Chat-IQ1_M.gguf) | IQ1_M | | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-SEC-Chat-GGUF --include "Llama-3-SEC-Chat-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-SEC-Chat-GGUF --include "Llama-3-SEC-Chat-Q8_0.gguf/*" --local-dir Llama-3-SEC-Chat-Q8_0
```
You can either specify a new local-dir (Llama-3-SEC-Chat-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
numind/NuExtract-large | numind | 2024-06-28T06:33:02Z | 497 | 87 | transformers | [
"transformers",
"safetensors",
"phi3small",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-06-19T13:17:59Z | ---
license: mit
language:
- en
---
# Structure Extraction Model by NuMind 🔥
NuExtract-large is a version of [phi-3-small](https://huggingface.co/microsoft/Phi-3-small-8k-instruct), fine-tuned on a private high-quality synthetic dataset for information extraction. To use the model, provide an input text (less than 2000 tokens) and a JSON template describing the information you need to extract.
Note: This model is purely extractive, so all text output by the model appears verbatim in the original text. You can also provide an example of output formatting to help the model understand your task more precisely.
Try the base model here: https://huggingface.co/spaces/numind/NuExtract
We also provide a tiny (0.5B) and base (3.8B) version of this model: [NuExtract-tiny](https://huggingface.co/numind/NuExtract-tiny) and [NuExtract](https://huggingface.co/numind/NuExtract)
**Checkout other models by NuMind:**
* SOTA Zero-shot NER Model [NuNER Zero](https://huggingface.co/numind/NuNER_Zero)
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
## Benchmark
Zero-shot benchmark (to be released soon):
<p align="left">
<img src="result.png" width="600">
</p>
Fine-tuning benchmark (see the blog post):
<p align="left">
<img src="result_ft.png" width="600">
</p>
## Usage
To use the model:
```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
def predict_NuExtract(model, tokenizer, text, schema, example=["","",""]):
schema = json.dumps(json.loads(schema), indent=4)
input_llm = "<|input|>\n### Template:\n" + schema + "\n"
for i in example:
if i != "":
input_llm += "### Example:\n"+ json.dumps(json.loads(i), indent=4)+"\n"
input_llm += "### Text:\n"+text +"\n<|output|>\n"
input_ids = tokenizer(input_llm, return_tensors="pt", truncation=True, max_length=4000).to("cuda")
output = tokenizer.decode(model.generate(**input_ids)[0], skip_special_tokens=True)
return output.split("<|output|>")[1].split("<|end-output|>")[0]
model = AutoModelForCausalLM.from_pretrained("numind/NuExtract-large", trust_remote_code=True, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract-large", trust_remote_code=True)
model.to("cuda")
model.eval()
text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/"""
schema = """{
"Model": {
"Name": "",
"Number of parameters": "",
"Number of token": "",
"Architecture": []
},
"Usage": {
"Use case": [],
"Licence": ""
}
}"""
prediction = predict_NuExtract(model, tokenizer, text, schema, example=["","",""])
print(prediction)
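# Optional (illustrative, not from the original card): steer the output format
# by passing a worked example through the `example` argument; each example must
# be a valid JSON string, since predict_NuExtract runs it through json.loads:
#   example_out = '{"Model": {"Name": "Mistral 7B", "Number of parameters": "7-billion", "Number of token": "", "Architecture": ["GQA", "SWA"]}, "Usage": {"Use case": [], "Licence": "Apache 2.0"}}'
#   prediction = predict_NuExtract(model, tokenizer, text, schema, example=[example_out, "", ""])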
``` |
jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF | jdcrutchley | 2024-06-19T16:17:33Z | 497 | 0 | null | [
"gguf",
"uncensored",
"llama3",
"instruct",
"open",
"llama-cpp",
"gguf-my-repo",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"model-index",
"region:us"
]
| null | 2024-06-19T16:17:12Z | ---
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
license: llama3
tags:
- uncensored
- llama3
- instruct
- open
- llama-cpp
- gguf-my-repo
model-index:
- name: Llama-3-8B-Lexi-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
---
# jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF
This model was converted to GGUF format from [`Orenguteng/Llama-3-8B-Lexi-Uncensored`](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF --hf-file llama-3-8b-lexi-uncensored-q4_k_m.gguf -c 2048
```
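If you prefer Python, a minimal sketch using the llama-cpp-python bindings (an assumption on my part; the bindings are not covered by this card):
```python
# Minimal sketch: loading this GGUF through llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jdcrutchley/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF",
    filename="llama-3-8b-lexi-uncensored-q4_k_m.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```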
|
cognitivecomputations/dolphin-2.9.3-mistral-7B-32k | cognitivecomputations | 2024-06-25T03:39:55Z | 497 | 25 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"conversational",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-25T02:12:42Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# Dolphin 2.9.3 Mistral 7b v0.3 32k 🐬
Curated and trained by Eric Hartford and Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9.3:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node
- [OnDemand](https://on-demand.io/) - provided inference sponsorship
This model is based on mistralai/Mistral-7B-v0.3 and is governed by the Apache 2.0 license.
The base model has 32k context, and our fine-tuning used a sequence length of 8192.
Dolphin 2.9.3 uses the ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
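For convenience, a minimal sketch of building this prompt programmatically (assuming the repo's tokenizer ships the ChatML chat template shown above):
```python
# Minimal sketch: producing the ChatML prompt above with transformers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9.3-mistral-7B-32k")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about dolphins."},
]
# add_generation_prompt appends the trailing "<|im_start|>assistant" turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```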
Dolphin-2.9.3 has a variety of instruction following, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models (https://erichartford.com/uncensored-models). You are responsible for any content you create using this model. Enjoy responsibly.
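One minimal sketch of such an alignment layer, where `is_allowed` is a hypothetical policy check you would supply yourself (purely illustrative, not part of the Dolphin release):
```python
# Illustrative alignment wrapper: screen user turns before they reach the model.
# `is_allowed` is a hypothetical policy function you must implement yourself.
def guarded_generate(generate_fn, messages, is_allowed):
    user_turns = [m["content"] for m in messages if m["role"] == "user"]
    if not all(is_allowed(turn) for turn in user_turns):
        return "Request declined by service policy."
    return generate_fn(messages)
```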
Dolphin is licensed under the Apache 2.0 license. We grant permission for any use, including commercial. Dolphin was trained on data generated from GPT-4, among other models.
## Evals

## Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
# load_in_4bit: true
strict: false
datasets:
- path: /workspace/datasets/dolphin-2.9.3/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/SystemChat_filtered_sharegpt.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/SystemChat_multilingual_sharegpt.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.3/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
chat_template: chatml
# adapter: qlora
# lora_r: 128
# lora_alpha: 16
# lora_modules_to_save: [embed_tokens, lm_head]
# lora_dropout: 0.05
# lora_target_linear: true
dataset_prepared_path: /workspace/axolotl/dolph-2.9.3-prepared
val_set_size: 0.01
output_dir: /workspace/axolotl/dolphin-2.9.3-mistral-7B
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
wandb_project: dolphin-2.9.3-Mistral-7B
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 5e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32:
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
# evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
save_total_limit: 2
save_steps:
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
tokens:
- "<|im_start|>"
```
|
nickmuchi/segformer-b4-finetuned-segments-sidewalk | nickmuchi | 2022-03-21T07:32:43Z | 496 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2022-03-20T06:54:20Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
widget:
- src: https://drive.google.com/uc?id=1-ae6Vtvs-fO1j0D2kxEDX4rKxRipda2j
example_title: Sidewalk with traffic
- src: https://drive.google.com/uc?id=1-dwxxF6LzbEvATr_mwvrAjot-DdBLAM4
example_title: Sidewalk with buildings
datasets:
- segments/sidewalk-semantic
model-index:
- name: segformer-b4-finetuned-segments-sidewalk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6463
- Mean Accuracy: 0.5168
- Mean Iou: 0.4317
- Overall Accuracy: 0.8895
- Per Category Accuracy: [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0]
- Per Category Iou: [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0]
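The mean metrics above average the per-category values with `nan` entries skipped (categories with no pixels in the evaluation labels), e.g.:
```python
import numpy as np

# Abbreviated per-category IoU from the evaluation above; nan marks categories
# that the mean skips. The full list averages to roughly 0.4317.
per_category_iou = [float("nan"), 0.8641, 0.8954, 0.4424]  # ... remaining categories
mean_iou = float(np.nanmean(per_category_iou))
```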
## Model description
More information needed
## Intended uses & limitations
More information needed
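Pending a fuller write-up, a minimal inference sketch (assuming the standard transformers Segformer classes; not part of the original card):
```python
# Minimal sketch: semantic segmentation with this checkpoint via transformers.
import requests
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "nickmuchi/segformer-b4-finetuned-segments-sidewalk"
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

url = "https://drive.google.com/uc?id=1-ae6Vtvs-fO1j0D2kxEDX4rKxRipda2j"  # widget example above
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits         # (batch, num_labels, height/4, width/4)
segmentation = logits.argmax(dim=1)[0]  # per-pixel class ids
```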
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
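For reference, a sketch of how these settings map onto transformers' `TrainingArguments` (the original fine-tuning script is not included in this card):
```python
from transformers import TrainingArguments

# A sketch mapping the hyperparameters above onto TrainingArguments;
# this is not the original training script.
training_args = TrainingArguments(
    output_dir="segformer-b4-finetuned-segments-sidewalk",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```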
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy | Per Category Accuracy | Per Category Iou |
|:-------------:|:-----:|:-----:|:---------------:|:-------------:|:--------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.0086 | 0.25 | 100 | 0.9195 | 0.2302 | 0.1742 | 0.7405 | [nan, 0.754391784765388, 0.8738098328493714, 0.0, 0.6095047025690915, 0.04406067496837279, nan, 0.11344860810198232, 0.03344878303363856, 0.0, 0.9451322667227594, 0.0, 0.0, 0.0, 0.0, 8.118464635968046e-06, 0.0, 0.0, 0.8406900175689528, 0.0, 0.33313290995723815, 0.007980320315659196, 0.0, nan, 0.0, 0.01001465431517245, 0.0, 0.0, 0.9094842682836028, 0.9104621468677264, 0.9500069670140131, 0.0, 0.0, 0.030522857924993155, 0.0] | [nan, 0.5181348731869903, 0.7666613623083653, 0.0, 0.3145052392920833, 0.040279298027504136, nan, 0.09896279300890763, 0.0332534621335044, 0.0, 0.707185048053476, 0.0, 0.0, 0.0, 0.0, 8.11839872703508e-06, 0.0, 0.0, 0.6129636976206597, 0.0, 0.21304181051016494, 0.007979819175153202, 0.0, nan, 0.0, 0.009972716399085856, 0.0, 0.0, 0.8032595523715207, 0.5644424403160349, 0.8548000615746258, 0.0, 0.0, 0.02810796628175876, 0.0] |
| 0.6465 | 0.5 | 200 | 0.7250 | 0.2963 | 0.2416 | 0.7963 | [nan, 0.8965158332325365, 0.9203420775747997, 0.0005677570093457944, 0.42947876549598557, 0.20108992228390948, nan, 0.6149826174335852, 0.6106893770460692, 0.0, 0.9320756176369465, 0.0, 0.0, 0.0, 0.0, 0.23413652010131844, 0.0, 0.0, 0.9437607244807804, 0.0, 0.2033741348512844, 0.2597617238717267, 0.0, nan, 0.0, 0.21746480347516617, 0.0, 0.0, 0.8793454644762622, 0.8380851985041863, 0.9445753860505853, 0.0, 0.0, 0.35629926758549024, 0.0] | [nan, 0.6645359168510458, 0.8064416600263559, 0.000566105647428005, 0.4116417722563792, 0.17504073239500048, nan, 0.34611894249410324, 0.4768988514264542, 0.0, 0.7872815412923856, 0.0, 0.0, 0.0, 0.0, 0.22760454893418883, 0.0, 0.0, 0.6497218142931416, 0.0, 0.16433182458127107, 0.24025960226620707, 0.0, nan, 0.0, 0.1865917623179034, 0.0, 0.0, 0.8237045305017561, 0.6485287252686867, 0.8916263487480074, 0.0, 0.0, 0.23161660227979464, 0.0] |
| 0.6777 | 1.0 | 400 | 0.6645 | 0.3343 | 0.2755 | 0.8205 | [nan, 0.8955600256602996, 0.9528284776336102, 0.20619042056074766, 0.4578573681184769, 0.34171859852352976, nan, 0.5150824142204389, 0.8000759121317076, 0.0, 0.9308408861203066, 0.0, 0.0, 0.0, 0.0, 0.8202247191011236, 0.0, 0.0, 0.931415684238172, 0.0, 0.22729327499111263, 0.2807173404242283, 0.0, nan, 0.0, 0.3332993143873973, 0.0, 0.0, 0.904612735522824, 0.9085503237620377, 0.9531456202767545, 0.0, 0.0, 0.2395403274915222, 0.0] | [nan, 0.7091852218081763, 0.8215012473174504, 0.20316384883142716, 0.449169741519482, 0.2820828827399737, nan, 0.4034439348068946, 0.5801054036574794, 0.0, 0.8406284073872154, 0.0, 0.0, 0.0, 0.0, 0.5491287380561565, 0.0, 0.0, 0.6833033543785748, 0.0, 0.196701947180513, 0.26816266986235426, 0.0, nan, 0.0, 0.2624543573765898, 0.0, 0.0, 0.8319417451247856, 0.6328739755697549, 0.9148380247362377, 0.0, 0.0, 0.18610354253000033, 0.0] |
| 0.4931 | 1.25 | 500 | 0.6513 | 0.3693 | 0.2930 | 0.8232 | [nan, 0.8195930838546497, 0.9565826472101743, 0.3660338785046729, 0.502483997738174, 0.5101274819814215, nan, 0.6120499018406388, 0.8168524932390757, 0.0, 0.9680832750475287, 0.0, 0.0, 0.0, 0.0, 0.7678687406637656, 0.0, 0.0, 0.9132467503439181, 0.07463699730127982, 0.3080053777834345, 0.3700341269744017, 0.0, nan, 0.0, 0.3144554351808238, 0.0, 0.0, 0.8719933435243034, 0.9280312013943278, 0.9461371807749148, 0.0, 0.3623930581804142, 0.40862556355693114, 0.0] | [nan, 0.7255301419742964, 0.8322765227346863, 0.3328323011716717, 0.4866977152337555, 0.31646114214929966, nan, 0.4116248877039441, 0.584768070212383, 0.0, 0.7940437031847611, 0.0, 0.0, 0.0, 0.0, 0.5384221282312557, 0.0, 0.0, 0.7148576049798162, 0.06922710729587371, 0.23689839512021127, 0.330131038978254, 0.0, nan, 0.0, 0.25964434649208096, 0.0, 0.0, 0.8276496500163791, 0.5924934568973941, 0.9145898275185997, 0.0, 0.10460157785142388, 0.3046522912622977, 0.0] |
| 0.1718 | 2.0 | 800 | 0.5338 | 0.3766 | 0.3117 | 0.8521 | [nan, 0.9149980619048741, 0.9439616375983239, 0.49970093457943926, 0.7343188057936092, 0.4654595153245685, nan, 0.4401632944315461, 0.7951368790624852, 0.0, 0.9516775700030986, 0.0, 0.0, 0.0, 0.0, 0.7842599207637851, 0.0, 0.0, 0.9120325078402151, 0.0, 0.5436783980174178, 0.289193941696178, 0.0, nan, 0.0, 0.4040691893023499, 0.04438191043850125, 0.0, 0.9289921718405059, 0.9105179916825697, 0.9579859465374478, 0.0, 0.00014225040134934668, 0.5310102962619485, 0.0] | [nan, 0.7682867926029272, 0.863978713337328, 0.3619354489331745, 0.619807980106986, 0.4001297195410576, nan, 0.37693255173950874, 0.6055069405805374, 0.0, 0.8443884543167844, 0.0, 0.0, 0.0, 0.0, 0.5757144134211389, 0.0, 0.0, 0.7512958252799772, 0.0, 0.35684944134400076, 0.2822025918120264, 0.0, nan, 0.0, 0.3086991377431782, 0.04423000485801351, 0.0, 0.8578322873273115, 0.6920597473565505, 0.9258143343645202, 0.0, 0.00013209541062801931, 0.3399454223242722, 0.0] |
| 1.7925 | 2.25 | 900 | 0.5745 | 0.3877 | 0.3157 | 0.8463 | [nan, 0.9373443718928436, 0.8936817705653165, 0.5237184579439252, 0.785620810686892, 0.5932309765570626, nan, 0.5731998228133042, 0.7751909664563268, 0.0, 0.9330254836699918, 0.0, 0.0, 0.0, 0.0, 0.8874780801454829, 0.0, 0.0, 0.9253989025665076, 0.0, 0.49743326413606553, 0.3720606075459213, 0.0, nan, 0.0, 0.362670748940179, 0.2263189382021227, 0.0, 0.9355852115710428, 0.9121195658169062, 0.9653801272784691, 0.0, 0.09587677050945966, 0.21074794549629322, 0.0] | [nan, 0.7666762008063966, 0.8459820722288737, 0.35589376130270695, 0.6602856629180212, 0.391087786259542, nan, 0.4283483218139711, 0.618615992154992, 0.0, 0.8563419873974479, 0.0, 0.0, 0.0, 0.0, 0.4695442264821982, 0.0, 0.0, 0.7387838557909564, 0.0, 0.3568544684209477, 0.3548962568907604, 0.0, nan, 0.0, 0.28509334019028026, 0.21794051124482566, 0.0, 0.8588025306782998, 0.6960344960020876, 0.927551192360457, 0.0, 0.09183812508516147, 0.18221393560509547, 0.0] |
| 0.4287 | 2.5 | 1000 | 0.5140 | 0.4156 | 0.3337 | 0.8596 | [nan, 0.9114284539509796, 0.9599424299786812, 0.3729602803738318, 0.6955020648206622, 0.6337076451002155, nan, 0.648796319756489, 0.9076149357119134, 0.0, 0.9333320442069727, 0.0, 0.0, 0.0, 0.0, 0.837638825745275, 0.0, 0.0, 0.8487128760410935, 0.14962168247818672, 0.7450834097721757, 0.4416333770387344, 0.0, nan, 0.005162707675408485, 0.4304364892447794, 0.29855310097272386, 0.0, 0.9243997842101966, 0.9100753698167738, 0.9780073694330464, 0.0, 0.3377837387469772, 0.3283183517042185, 0.0] | [nan, 0.8056652041667661, 0.868478873207236, 0.36872340720413566, 0.648560287656455, 0.4227995307199668, nan, 0.5211383920382058, 0.5417303836612635, 0.0, 0.8614512323591124, 0.0, 0.0, 0.0, 0.0, 0.4902451772308277, 0.0, 0.0, 0.7414797203702529, 0.1034994187677877, 0.37103542329614997, 0.38941938864817555, 0.0, nan, 0.004775330844065127, 0.3339817219387496, 0.27392303157209946, 0.0, 0.8695462814099766, 0.7123344518279238, 0.9249476057387171, 0.0, 0.15441354067963511, 0.2686663032210652, 0.0] |
| 0.2477 | 2.75 | 1100 | 0.5852 | 0.3976 | 0.3245 | 0.8501 | [nan, 0.9240898770490549, 0.9130342916084687, 0.5360268691588785, 0.6767027987344469, 0.5151102302165186, nan, 0.6523417772790812, 0.8782321962328604, 0.0, 0.9459085723287141, 0.01212233473285585, 0.0, 0.0, 0.0, 0.8298613366240176, 0.0, 0.0, 0.8996769125664682, 0.0046441166244474245, 0.58637589184745, 0.4359797566385237, 0.0, nan, 0.0, 0.4451038886272047, 0.26994748620682013, 0.0, 0.9522730369995648, 0.9058973503358962, 0.9744264856283144, 0.024141075054913176, 0.024040317828039587, 0.315675681715336, 0.0] | [nan, 0.7635041179698989, 0.8504428879888529, 0.32134395517814934, 0.5814428391874907, 0.4398125968608028, nan, 0.5183108660060791, 0.5876442483214019, 0.0, 0.8637126471579993, 0.010904378413403684, 0.0, 0.0, 0.0, 0.5582717546245474, 0.0, 0.0, 0.7543635882159604, 0.004548919124920941, 0.3707771520336274, 0.37139606254827867, 0.0, nan, 0.0, 0.32640450731902027, 0.25674365674787153, 0.0, 0.8589069009951039, 0.7216899081490464, 0.9303705560523882, 0.023933704665274814, 0.02273469779955799, 0.24717820737291407, 0.0] |
| 0.2092 | 3.5 | 1400 | 0.5305 | 0.4215 | 0.3450 | 0.8615 | [nan, 0.8854690236777607, 0.9752597083363964, 0.4837301401869159, 0.7543174059151941, 0.32120495047431574, nan, 0.6121067808383275, 0.8640129050623903, 0.006110443680351299, 0.9472197081638014, 0.22567300568041493, 0.0, 0.0, 0.0, 0.849337533285705, 0.0, 0.0, 0.9323370763681338, 0.09924833192602527, 0.4992824257958052, 0.5897763059541461, 0.0, nan, 0.005025401620211451, 0.5194038833935207, 0.26516141898030177, 0.0, 0.9098213390526053, 0.9140251839431679, 0.9696367307434691, 0.0, 0.46129773009002417, 0.39953043905763785, 0.0] | [nan, 0.8279523588823188, 0.8503094621684615, 0.4166789099025304, 0.6531647345358885, 0.2970569371138754, nan, 0.4891076127233826, 0.6267720763107083, 0.0060749588138385505, 0.8628731375345856, 0.1638621555382868, 0.0, 0.0, 0.0, 0.5868382377688277, 0.0, 0.0, 0.766351782387915, 0.08906272053962098, 0.3548571571167739, 0.42844759670807536, 0.0, nan, 0.004661470273574813, 0.3559905085937402, 0.24649831094998764, 0.0, 0.8706735405566627, 0.7172875061476175, 0.937101627261161, 0.0, 0.18277266944717308, 0.30403604315996224, 0.0] |
| 0.1763 | 3.75 | 1500 | 0.5284 | 0.4184 | 0.3549 | 0.8725 | [nan, 0.9155522786024052, 0.9647682266779387, 0.44949532710280377, 0.7917047766525447, 0.5148885009996292, nan, 0.6544609508444807, 0.8639037813730607, 0.006400430838062886, 0.9591118988406824, 0.21581460442907713, 0.0, 0.0, 0.0, 0.8629440800155874, 0.0, 0.0, 0.9189088001847752, 0.0, 0.553022223587637, 0.46456492702831864, 0.0, nan, 0.09048469037484554, 0.4453708065107029, 0.3956482240588509, 0.0, 0.9463804808607508, 0.8827003794689641, 0.9646183286805874, 0.0, 0.10191225182385336, 0.42574316887992536, 0.0] | [nan, 0.8411073731152799, 0.8690976727110442, 0.4122661523625844, 0.6761261173524866, 0.4325420396336731, nan, 0.5235010874548043, 0.6267662599177323, 0.006377182482354398, 0.8589461626478264, 0.21441570391575504, 0.0, 0.0, 0.0, 0.5785872529434498, 0.0, 0.0, 0.7644870697544361, 0.0, 0.3931242258826368, 0.4137160566746283, 0.0, nan, 0.07477420233286435, 0.3486446014515762, 0.35308773803167826, 0.0, 0.8775350307334798, 0.7615382190401359, 0.9362335277343948, 0.0, 0.08161239401780339, 0.3123361865981938, 0.0] |
| 0.227 | 4.0 | 1600 | 0.5923 | 0.4426 | 0.3538 | 0.8544 | [nan, 0.9577374173182539, 0.9166854278467985, 0.1959217289719626, 0.7810987315371373, 0.5809225413617377, nan, 0.5835888579214346, 0.8662428239312995, 0.024607481668668958, 0.960621119945819, 0.44992590763151397, 0.0, 0.0, 0.0, 0.890757939858414, 0.0, 0.0, 0.8824976680624833, 0.23107998476795974, 0.6677916708726317, 0.5485129952087443, 0.0, nan, 0.13447755045997528, 0.4840215627780395, 0.4094524827723738, 0.0, 0.9258667409261705, 0.8784809934585728, 0.9680485743444954, 0.0, 0.5403279887825397, 0.2843078375615234, 0.0] | [nan, 0.732742632898181, 0.85248637631468, 0.1937195271972472, 0.6916132972252533, 0.4613544304478555, nan, 0.5019837033874182, 0.6339381818434339, 0.024391746227286727, 0.8507334888775837, 0.3399262956570416, 0.0, 0.0, 0.0, 0.5118086361876507, 0.0, 0.0, 0.7596215991272331, 0.14059847786558677, 0.3924964359231432, 0.4511581321221818, 0.0, nan, 0.11381225741975969, 0.3543174804464886, 0.36413975210357263, 0.0, 0.8783724167054704, 0.7445500851078998, 0.9377100490542223, 0.0, 0.1494074611014649, 0.24185599444907813, 0.0] | |
| 0.3219 | 4.75 | 1900 | 0.5306 | 0.4360 | 0.3684 | 0.8771 | [nan, 0.9383015101174155, 0.9581139041020363, 0.4607803738317757, 0.811509517207101, 0.6291153866526402, nan, 0.6505845609717001, 0.814323670351568, 0.021541903144289325, 0.9406027168809682, 0.41314727916357946, 0.0, 0.0, 0.0, 0.8354955510813795, 0.0, 0.0, 0.9418887586641801, 0.05121773539297008, 0.6343575406735104, 0.518250578994449, 0.0, nan, 0.027131676506933957, 0.4585466059559324, 0.39812988854667525, 0.0, 0.9202410996786, 0.895342680330491, 0.9736189575948254, 0.00016059513448547392, 0.336889593367067, 0.32415208076113006, 0.0] | [nan, 0.8286943759948178, 0.8911330146359255, 0.44085585238189445, 0.7563455702043241, 0.44281982228819555, nan, 0.5389345827619121, 0.6390151642075557, 0.02125355077350663, 0.8721853143259732, 0.34406869718732325, 0.0, 0.0, 0.0, 0.6106328062420269, 0.0, 0.0, 0.7642481786905918, 0.04822404265103627, 0.40217085841005906, 0.4365575304022451, 0.0, nan, 0.02300777793302594, 0.35943746679548483, 0.36207556675062974, 0.0, 0.8758467465629671, 0.7286601531442717, 0.9422882468777368, 0.00016028416831905857, 0.18664925297515172, 0.274341743647937, 0.0] | |
| 0.3758 | 5.25 | 2100 | 0.5413 | 0.4400 | 0.3618 | 0.8749 | [nan, 0.9446099997724584, 0.9535776804748952, 0.5333586448598131, 0.7118822151738956, 0.5725146926401914, nan, 0.637704053404208, 0.8958248327560848, 0.02011268072413936, 0.9449676672959805, 0.4536305260558163, 0.0, 0.0, 0.0, 0.8527716438267194, 0.0, 0.0, 0.9263943868758329, 0.13527541846719315, 0.6231382204452325, 0.5343291629394538, 0.0, nan, 0.07845667993958534, 0.48360548490082167, 0.39496133478097095, 0.0, 0.9342636737434504, 0.9081380373512183, 0.9754223113378334, 0.0, 0.0686053364221992, 0.4949887428280921, 0.0] | [nan, 0.8421459412186475, 0.884886678991681, 0.3243137842681656, 0.6975183850797184, 0.4470212561315764, nan, 0.5491953906967838, 0.5880944000946866, 0.01971493543409405, 0.8720965863289499, 0.2829941580535405, 0.0, 0.0, 0.0, 0.5648458841496203, 0.0, 0.0, 0.7876641278543601, 0.11773309221380866, 0.4507472099997672, 0.4306682617343027, 0.0, nan, 0.053795025325274436, 0.35687388479928317, 0.3506028598965402, 0.0, 0.8763044901374653, 0.7342806685419377, 0.9417441335611155, 0.0, 0.05263732322996086, 0.3527909231538019, 0.0] |
| 0.1962 | 6.0 | 2400 | 0.5252 | 0.4591 | 0.3755 | 0.8678 | [nan, 0.8788767058796604, 0.9301585587737999, 0.5368457943925233, 0.8328600223823257, 0.6594750437607246, nan, 0.7274099889861577, 0.8314845566257058, 0.20671941671154564, 0.9452567774639331, 0.5536552235119783, 0.0, 0.0, 0.0, 0.8969685653049295, 0.0, 0.0, 0.9273548947094251, 0.04859351976026093, 0.6165535079211122, 0.5024186037962429, 0.0, nan, 0.07840175751750653, 0.49256293504998166, 0.4105160532671556, 0.0, 0.928572042963352, 0.9119196275909236, 0.976082967184019, 0.09759262712918065, 0.23430673250828102, 0.4679128700481014, 0.0] | [nan, 0.8020983983063393, 0.8683865888896747, 0.4544978013913642, 0.6680523786513721, 0.4517445785165809, nan, 0.5857034011566181, 0.6746845091894639, 0.18334129404416358, 0.8638403093611754, 0.3497406295097313, 0.0, 0.0, 0.0, 0.5136113874503752, 0.0, 0.0, 0.7818072530904586, 0.04626054062573883, 0.40338464571865573, 0.41853055526845995, 0.0, nan, 0.05885020509966401, 0.3764221220090192, 0.37385233165849424, 0.0, 0.8760216287329546, 0.7184759765101966, 0.9447723343539753, 0.07888984275215143, 0.17396158662623154, 0.3506487661563549, 0.0] |
| 0.2721 | 6.25 | 2500 | 0.5120 | 0.4726 | 0.3905 | 0.8834 | [nan, 0.9352277032235452, 0.9553332100455781, 0.5201098130841122, 0.8315588432600179, 0.6507746356557826, nan, 0.7171028251625792, 0.8676946434502064, 0.12399022329011143, 0.9414992885437384, 0.5631225817074175, 0.0, 0.0, 0.0, 0.8815434824965902, 0.0, 0.0, 0.9265160801760165, 0.12371893574396928, 0.6983379489227609, 0.496123187961817, 0.0, nan, 0.1353837704242757, 0.5335426806929398, 0.5267111298220735, 0.0, 0.9267000099723489, 0.9157963608485102, 0.9708294620227798, 0.0039371710389987154, 0.44802779979272084, 0.43061615557802646, 0.0] | [nan, 0.847290915944923, 0.8918843187400161, 0.4215259288995603, 0.7694117638497967, 0.498788432969163, nan, 0.5567520477680967, 0.6726198795136411, 0.11618337797445752, 0.8753637372987935, 0.42321077786886513, 0.0, 0.0, 0.0, 0.581673157378788, 0.0, 0.0, 0.7933263418076343, 0.10532064834390416, 0.437053368284101, 0.4288208971032145, 0.0, nan, 0.09955372468245795, 0.3973712316699539, 0.442531089433316, 0.0, 0.880946087123613, 0.7345359613309864, 0.9452321649786941, 0.003849095209395844, 0.23329171252010497, 0.3386007935784502, 0.0] |
| 0.2409 | 6.5 | 2600 | 0.5224 | 0.4636 | 0.3840 | 0.8786 | [nan, 0.8731382676849351, 0.9738163801183563, 0.5331343457943926, 0.8196854363098576, 0.6540081867354192, nan, 0.6300072908533401, 0.8875978554822792, 0.13449190107295247, 0.955765201040042, 0.6083600889108421, 0.0, 0.03281733746130031, 0.0, 0.8703400012989544, 0.0, 0.0, 0.9262836625295774, 0.08389211741916257, 0.6663345782989761, 0.5452994228436286, 0.0, nan, 0.13288480021968968, 0.47811535039514313, 0.4147924929649243, 0.0, 0.9382028859601423, 0.8756597961457425, 0.965266610679491, 0.010467176426706453, 0.4342701538336483, 0.3917412023665201, 0.0] | [nan, 0.8209592404927408, 0.8860938595226477, 0.41218836114746504, 0.7196016259460952, 0.4954368536125842, nan, 0.545313357840212, 0.6491223200313668, 0.12371625097650668, 0.8633659080664855, 0.4708871648638746, 0.0, 0.03281733746130031, 0.0, 0.5802203868677137, 0.0, 0.0, 0.7907500494259085, 0.06952381605757291, 0.447113968783744, 0.44327869995554786, 0.0, nan, 0.08728984775236309, 0.38119151688382136, 0.37855655092920265, 0.0, 0.8832564638909316, 0.7526222693644393, 0.9416404778849121, 0.009589327157183334, 0.18190330268981955, 0.32252322488728213, 0.0] | |
| 0.1524 | 10.5 | 4200 | 0.5353 | 0.5128 | 0.4237 | 0.8872 | [nan, 0.9268517790355991, 0.9602839791773874, 0.537267523364486, 0.8456677302072528, 0.6567083558655384, nan, 0.7076703913792123, 0.8633391848934858, 0.3143875056961763, 0.9515964493686976, 0.6206264921379765, 0.0, 0.7490196078431373, 0.08954470929499306, 0.8721747743066831, 0.0, 0.005131830440133009, 0.9147190737070242, 0.11450520703985165, 0.6915674424660561, 0.5259122991900205, 0.0019833510251969382, nan, 0.2044761773994233, 0.5593918459203433, 0.4851432496510159, 0.0, 0.9463960710558084, 0.8834918590669917, 0.9670624325154579, 0.012832069294210286, 0.5599179011969355, 0.44183701402816805, 0.0] | [nan, 0.8497898154944094, 0.8911284588944798, 0.4558941463477496, 0.7715538102169041, 0.5041805687956784, nan, 0.5916295134976238, 0.6664176289411136, 0.25352865518566153, 0.8836310493548173, 0.5013133395398324, 0.0, 0.6053882725832013, 0.05452311472892029, 0.5946321429362145, 0.0, 0.005111887747118043, 0.802846410488875, 0.09434940383618455, 0.47282749487636766, 0.44441582446257716, 0.001977936260307555, nan, 0.14078808047194072, 0.4107132907440319, 0.42875046507529324, 0.0, 0.8865359213150946, 0.7513094837462199, 0.9478585417349973, 0.011508324602586469, 0.19474424489161243, 0.34180230893483227, 0.0] |
| 0.052 | 10.75 | 4300 | 0.5611 | 0.5030 | 0.4222 | 0.8855 | [nan, 0.932148839850802, 0.9568949634271852, 0.5225233644859814, 0.8511642191077112, 0.6031687568751455, nan, 0.7201923889006668, 0.8793424111590834, 0.1743029951530718, 0.9511564170902311, 0.5728369144644768, 0.018116900290928325, 0.7155830753353973, 0.08790515827973262, 0.8945492628434111, 0.0, 0.0, 0.9018928482213427, 0.19409261742744086, 0.6978142148450815, 0.5187192887865012, 0.004106374657802112, nan, 0.18591239873678428, 0.5679096666143298, 0.48372515565797347, 0.0, 0.9465148790940053, 0.8887757437702006, 0.9729464658947179, 0.03061668531642422, 0.3269727082444268, 0.4968253657882534, 0.0] | [nan, 0.8544673632153686, 0.8915093314898118, 0.4824501321862451, 0.7281104549174552, 0.4796578889108752, nan, 0.5955885392390377, 0.6806501724220245, 0.15806082007550856, 0.8869557339277052, 0.5018390970394144, 0.017487873372478938, 0.5719234576047509, 0.08299595141700405, 0.5743453150410742, 0.0, 0.0, 0.7988127196821454, 0.14769412965284384, 0.4636640495670947, 0.44194705232908676, 0.004079706927175844, nan, 0.14373978216098007, 0.4138202592132837, 0.4263783910470499, 0.0, 0.8825003483580057, 0.7459231292221788, 0.9497549296351595, 0.022555788364877087, 0.19864442770898405, 0.36609089056617755, 0.0] |
| 0.0897 | 11.0 | 4400 | 0.5797 | 0.4966 | 0.4137 | 0.8864 | [nan, 0.9266090680496935, 0.9675701132103213, 0.5286179906542056, 0.8135055236213754, 0.6141498963415911, nan, 0.7310209435363914, 0.8153911847037054, 0.24547412900285845, 0.9446611067589995, 0.6598542850086441, 0.0, 0.5599071207430341, 0.13658721150208097, 0.8912937585243879, 0.0, 0.004870002356452753, 0.9252981123672058, 0.10847033891289591, 0.6586394910124014, 0.4795176884335903, 0.01181630258673669, nan, 0.18618701084717837, 0.5559088292248914, 0.4992355587068755, 0.0, 0.9406880436912528, 0.9118086274033954, 0.9573602602596679, 0.003960483235940155, 0.3327033672702148, 0.4804871031358067, 0.0] | [nan, 0.8565575968459415, 0.8928102104157912, 0.43275555700074025, 0.7654702047573079, 0.47074416606474334, nan, 0.6054622841435586, 0.6863363711152467, 0.21403286978508218, 0.8828456438079144, 0.4322928605137194, 0.0, 0.4530688935281837, 0.09709521247982786, 0.5749041704195555, 0.0, 0.004865289040020926, 0.7951008940737603, 0.09395592969976839, 0.4548604901862724, 0.41665801557197046, 0.011736958934517204, nan, 0.1216732767438939, 0.41094472698150475, 0.430227229329769, 0.0, 0.8867287999971621, 0.7466484878252573, 0.9415279772911855, 0.0036285882442284325, 0.19204917359734425, 0.36246293958863207, 0.0] |
| 0.0936 | 11.25 | 4500 | 0.5731 | 0.5011 | 0.4193 | 0.8864 | [nan, 0.9324196276009762, 0.9569564158641476, 0.5246004672897197, 0.8364710008894733, 0.6578250088383729, nan, 0.7038215792022807, 0.8665369834416663, 0.21309913418120055, 0.9410960435297098, 0.49318761834197744, 0.028167151547209734, 0.5808565531475748, 0.11010215664018161, 0.8849288822497889, 0.0, 0.0565548660749352, 0.9216694582309478, 0.11269226311693903, 0.6871508134702065, 0.5262584704743466, 0.01969383764456115, nan, 0.2076616778799945, 0.571397916993772, 0.476856262879174, 0.0, 0.9377623285515337, 0.907275545210859, 0.973954665451519, 0.050830950308757096, 0.38818102379646, 0.4678081196891568, 0.0] | [nan, 0.858380886499719, 0.8914561596816896, 0.45129869803574746, 0.786844102694609, 0.48464472942061587, nan, 0.6094618696875397, 0.6854209198991233, 0.18657623184200503, 0.8857526637100221, 0.394797106941035, 0.023946037099494097, 0.49684424239749303, 0.062077792789589706, 0.5615273263032089, 0.0, 0.055464256368118324, 0.7962485307269822, 0.09311408578835408, 0.4733745462314789, 0.44196131097098196, 0.019312422955759485, nan, 0.14722087024238295, 0.4185961804636968, 0.4181839379748557, 0.0, 0.8886792481667263, 0.7473472827679579, 0.9501856968302422, 0.031198480139267574, 0.2030701847638892, 0.3556589318498682, 0.0] |
| 0.033 | 14.25 | 5700 | 0.5935 | 0.5181 | 0.4292 | 0.8880 | [nan, 0.9232290780535377, 0.9550432923803572, 0.5331775700934579, 0.8469649770868216, 0.6796985960845084, nan, 0.7591958688611619, 0.8564643924657209, 0.21028211607771655, 0.9524029393967549, 0.6051700008232486, 0.0, 0.6860681114551084, 0.21654685332324378, 0.8960592972657011, 0.0, 0.03558243657214673, 0.9155229117646998, 0.140697693670425, 0.711005584058588, 0.5227324249145294, 0.037180848092072186, nan, 0.2080186736235068, 0.5726225990474695, 0.5346435930956549, 0.0, 0.9410130186192625, 0.9154633602859255, 0.9760592954761752, 0.01645064030834266, 0.4608913003718832, 0.4701447510293469, 0.0] | [nan, 0.8573293198744064, 0.8916240779976521, 0.48186665258934697, 0.7676170029872194, 0.4823511054134466, nan, 0.6260715377125842, 0.6901341142647419, 0.1894206549118388, 0.8862935130575381, 0.49201833941300493, 0.0, 0.5435813573180703, 0.1092586700604518, 0.5822497006272321, 0.0, 0.035439538946984116, 0.8016860332567224, 0.11209233305853257, 0.4701563285996208, 0.45173968006036097, 0.03573442156415282, nan, 0.1250185671139278, 0.43006031638093856, 0.44816121842496287, 0.0, 0.8878007481353359, 0.7386750898148962, 0.9519721480330992, 0.013876810802543318, 0.25855582662623405, 0.3720678838361397, 0.0] |
| 0.0548 | 14.5 | 5800 | 0.5902 | 0.5151 | 0.4174 | 0.8882 | [nan, 0.9249082282350853, 0.9577153821767257, 0.5438259345794393, 0.8625692959476665, 0.6265525664540941, nan, 0.7491911978889274, 0.8432461925321441, 0.249306102158333, 0.951930364538209, 0.6013830575450728, 0.0, 0.7704850361197111, 0.20002522386177324, 0.8704780151977658, 0.0, 0.0013615060351373288, 0.9208633435979287, 0.11193893938641368, 0.6970564096712325, 0.4979168453686571, 0.03908039555282418, nan, 0.18904297679527668, 0.5623985973726906, 0.5131506060136048, 0.0, 0.9399214361687687, 0.9123994793332818, 0.9756660223299524, 0.04515831571967342, 0.4303481070535878, 0.49404040291178064, 0.0] | [0.0, 0.8607762479438139, 0.8922939816555095, 0.45337232891467816, 0.7416336434657338, 0.4957900790517687, nan, 0.6227225352163122, 0.6905205002583658, 0.2142437565638406, 0.8883435707029895, 0.4944664432937354, 0.0, 0.5822804554671658, 0.1227364185110664, 0.6143083859952676, 0.0, 0.0013572770933389015, 0.7986526753983755, 0.09318127002721979, 0.47663610300281495, 0.44101175423554057, 0.037423427761281866, nan, 0.14246983588236511, 0.42780903014161104, 0.4432599000899573, 0.0, 0.8868797486244817, 0.7354235169834137, 0.9525392249964284, 0.03855126495647117, 0.2526545610728006, 0.37165059315614124, 0.0] |
| 0.1047 | 14.75 | 5900 | 0.5997 | 0.5159 | 0.4159 | 0.8881 | [nan, 0.9210892560336101, 0.9617335675034919, 0.5317464953271028, 0.8683264925417152, 0.6381114337134347, nan, 0.7416693813461018, 0.862755610380984, 0.2719665271966527, 0.9489817238040484, 0.570408331275212, 0.0005289605924358636, 0.6938596491228071, 0.22575356287047546, 0.8948821198934858, 0.0, 0.011022962322938758, 0.9258684979714679, 0.17593834335005545, 0.6548460763101033, 0.4725421838812847, 0.04097994301357618, nan, 0.22218865851984074, 0.5752629926205056, 0.5366821032106535, 0.0, 0.936931478673554, 0.9021336855923136, 0.9725860103434604, 0.020141738157403954, 0.43632262391026033, 0.4934216774582814, 0.0] | [0.0, 0.8607109591035689, 0.8928295853674818, 0.4670190706507743, 0.7523185639791471, 0.4845338501499847, nan, 0.6282224979925543, 0.6928170564904808, 0.23142272983643541, 0.8873278318309525, 0.46953884728763595, 0.0005215803885773895, 0.5542412002308136, 0.10845198424719782, 0.5869154300379641, 0.0, 0.010907018316536697, 0.793456051943224, 0.12649239962384984, 0.4589822701689517, 0.42143872921678477, 0.03893105461493551, nan, 0.13440869146302972, 0.4245448084603441, 0.46174816509389, 0.0, 0.8878226827336242, 0.7447736277446672, 0.951929183073613, 0.018382891806658124, 0.25878028202964926, 0.37484668044597425, 0.0] |
| 0.1363 | 15.0 | 6000 | 0.6052 | 0.5193 | 0.4155 | 0.8887 | [nan, 0.9281772418265013, 0.9663767872895684, 0.5342161214953272, 0.8447924129735698, 0.6015187219527939, nan, 0.7291077408868643, 0.8812164919106135, 0.23211400637971746, 0.9479408328730995, 0.633386844488351, 0.0030415234065062154, 0.789422084623323, 0.21314163198385672, 0.8954179385594596, 0.0, 0.0066242505171104655, 0.9164480291997693, 0.1360949684597427, 0.6964961019847766, 0.4960711090960334, 0.03860550868763618, nan, 0.19802279280516272, 0.5609541005914063, 0.5661075535662848, 0.0, 0.9376398917610389, 0.9059173441584945, 0.9782134208899593, 0.041454266650089104, 0.43892377410636263, 0.49969692229478707, 0.0] | [0.0, 0.8633930449091305, 0.8952460293484353, 0.42706756384454103, 0.7593774610091322, 0.47377891058119026, nan, 0.6217821374684249, 0.6898326802726141, 0.20124995510218743, 0.8868864734587292, 0.4952526552944963, 0.0028388052332757345, 0.6066698390038862, 0.10356026717323365, 0.5863739068024136, 0.0, 0.00656256484747873, 0.7990222508044155, 0.11130896362146828, 0.4768559231889487, 0.4358850122678166, 0.03689958080794596, nan, 0.14020726799012267, 0.42208907144066693, 0.46374312526092243, 0.0, 0.889531203939725, 0.7432560391610733, 0.952160090573041, 0.03558025789239662, 0.21245893254116582, 0.3712419453581397, 0.0] |
| 0.0804 | 15.25 | 6100 | 0.6205 | 0.5110 | 0.4268 | 0.8877 | [nan, 0.9338093608996594, 0.9656453309931633, 0.5360116822429907, 0.8032054069910557, 0.6059132718486427, nan, 0.7301936126609202, 0.8766143189258433, 0.22587928248891834, 0.9574923159422327, 0.619350456902939, 0.0011901613329806928, 0.7703818369453045, 0.07655442048177576, 0.8504335260115607, 0.0, 0.020239310868483754, 0.9198111518664089, 0.12485306048113379, 0.7319227623900414, 0.495000428884777, 0.03547684228169171, nan, 0.1875600713991487, 0.5538912440466844, 0.5455451906671689, 0.0, 0.9362906678973961, 0.9101525873385327, 0.9729007364591106, 0.02293143105806291, 0.4597532971610884, 0.48345782331547454, 0.0] | [nan, 0.856464729269542, 0.8942823604125036, 0.4347924144963024, 0.7282825257603309, 0.4836585626064097, nan, 0.6163747573889081, 0.6892970262677814, 0.20072891932188414, 0.888225522138808, 0.5066929332727181, 0.0011893749174045195, 0.6024777046931117, 0.05147557666214383, 0.6220782459974346, 0.0, 0.020031615227137266, 0.7981944383082095, 0.09975989363883506, 0.476298280003313, 0.4345003764655265, 0.03419217618393775, nan, 0.1330243066375818, 0.42041703246719714, 0.45861972618049734, 0.0, 0.8892991369897043, 0.7440154875361404, 0.9524152608652374, 0.021443727473549588, 0.22949422815524131, 0.36944182958821886, 0.0] |
| 0.0627 | 15.5 | 6200 | 0.6244 | 0.5088 | 0.4226 | 0.8864 | [nan, 0.9363099227676078, 0.9557843398515034, 0.5258376168224299, 0.8250218829308421, 0.6537759869721766, nan, 0.7370216777925434, 0.8573990605873701, 0.24421061352997225, 0.944441326435564, 0.6453651107269285, 0.0, 0.574406604747162, 0.202547610039097, 0.9001834773007729, 0.0, 0.08682219254837274, 0.9295308868150898, 0.08372655176410206, 0.6741101275248591, 0.4846229490117269, 0.03799094921503995, nan, 0.18766991624330634, 0.5747971947453813, 0.5357957944650019, 0.0, 0.9393777953152539, 0.9065412893119918, 0.9711350422513085, 0.01408833768494343, 0.423479444817005, 0.43092900998340755, 0.0] | [nan, 0.8597774723874926, 0.8905873458192073, 0.4468008441348313, 0.7358981742624778, 0.4808541172889169, nan, 0.6284059730270303, 0.6908370828825592, 0.2063894967177243, 0.8877064612239235, 0.5085303752716421, 0.0, 0.4786515887689728, 0.07696731524968849, 0.5910784632525015, 0.0, 0.08625308882819613, 0.7927730663764808, 0.07191564097641445, 0.4573643410852713, 0.43199170940310977, 0.036449399656946824, nan, 0.12474672799956191, 0.42888997799442735, 0.45055805027110624, 0.0, 0.8884059722861457, 0.7421115189770542, 0.9513756980737487, 0.012830765528906378, 0.21910649885920366, 0.3464300992446894, 0.0] |
| 0.0906 | 15.75 | 6300 | 0.6277 | 0.5077 | 0.4232 | 0.8874 | [nan, 0.9291486180310576, 0.9587963707454238, 0.5362032710280373, 0.8561640657502444, 0.6342631999714216, nan, 0.7070024940578683, 0.8671632585282536, 0.2429056713202701, 0.9448969225566771, 0.5583271589692929, 0.0010579211848717272, 0.6710010319917441, 0.23294236347584815, 0.9067513151912711, 0.0, 0.020684418610740187, 0.9250756288677204, 0.07677279425156046, 0.6503387447644879, 0.5319197495312902, 0.03860550868763618, nan, 0.18569270904846905, 0.5416470403517035, 0.5072344951363807, 0.0, 0.9414354322663816, 0.9037269864207472, 0.9731874869200364, 0.013277591280202247, 0.39988619967892053, 0.4915501377118052, 0.0] | [nan, 0.8573471144295101, 0.892101583588469, 0.4449642809016976, 0.7400242676373722, 0.48442379031764893, nan, 0.6140014998720169, 0.6924650683478314, 0.21178574008524165, 0.8871035802257583, 0.4782118177972077, 0.00099601593625498, 0.5315565729234794, 0.08438028233359221, 0.5871221081515825, 0.0, 0.020441960358122443, 0.7966462351239197, 0.06850549580427845, 0.4652701824381677, 0.4532145005879428, 0.03686906413403052, nan, 0.1488673139158576, 0.4142177021859072, 0.4423489401170992, 0.0, 0.888882064716084, 0.7468477974750474, 0.9515378343546987, 0.012387656809223801, 0.2237051521076804, 0.3671609871108074, 0.0] |
| 0.0798 | 16.0 | 6400 | 0.6190 | 0.5286 | 0.4172 | 0.8869 | [nan, 0.926680657145317, 0.9583277241233551, 0.5414509345794393, 0.8395448350384849, 0.6163055970613488, nan, 0.729106879083869, 0.8763296484319401, 0.26653962467376446, 0.94462856417892, 0.6354449658351856, 0.0, 0.7736326109391125, 0.21591625677891285, 0.8849045268558811, 0.34363411619283063, 0.10316026497002069, 0.9218656576332847, 0.10944717627775294, 0.7009902670312324, 0.5122599776979916, 0.038968657466897594, nan, 0.1919538651654538, 0.5525226356832574, 0.538875717356141, 0.0, 0.9457572762531493, 0.901183634297817, 0.9780756945897774, 0.023115338389489825, 0.3853969802271942, 0.4585034944719744, 0.0] | [0.0, 0.8564334135192141, 0.8938306198574103, 0.41026489890361634, 0.7353951913707414, 0.47809949912634986, nan, 0.6215698951590981, 0.6951678039270297, 0.23431724238396126, 0.8861469346690092, 0.5033256170323759, 0.0, 0.5823655078656049, 0.06725329981143935, 0.60684460181721, 0.013995167136528394, 0.10232968859569384, 0.80017144909153, 0.09089721553798556, 0.48491411153457703, 0.44620918590626235, 0.03736540418921091, nan, 0.14435885256397019, 0.42539846918525115, 0.4624629192971781, 0.0, 0.8873440144497453, 0.7475156108906514, 0.9524719380738451, 0.01972869725160058, 0.22189851053623036, 0.35861227450389216, 0.0] |
| 0.0901 | 16.25 | 6500 | 0.5917 | 0.5200 | 0.4299 | 0.8896 | [nan, 0.9258199912150333, 0.9603701848856869, 0.5186892523364486, 0.8721793039773063, 0.647948819969426, nan, 0.7465402918754385, 0.8815201404374436, 0.21442478975931065, 0.9491194402298921, 0.6424219972009549, 0.00039672044432689763, 0.7311661506707946, 0.1943498549627948, 0.8921543157758005, 0.15327564894932014, 0.07967428586390177, 0.9293905669893677, 0.12015927416016821, 0.6698895330720515, 0.5201315450880439, 0.040560925191351474, nan, 0.17654812577234655, 0.5835060449050087, 0.5231215794021847, 0.0, 0.9400508616673928, 0.8957790972168599, 0.9722137189382809, 0.011464420406979153, 0.38557987360035767, 0.46186248931546336, 0.0] | [nan, 0.866351138156412, 0.8939541036386832, 0.46360912979965524, 0.7507890322152613, 0.48660598648618647, nan, 0.6225598103833513, 0.6911588008377322, 0.19347001326929186, 0.887840691207522, 0.5082802755206722, 0.00036527456471447707, 0.5638678869876641, 0.0832837918175431, 0.6045529063562446, 0.006450606044842116, 0.07925304719241588, 0.7975401296695107, 0.09911841629051973, 0.4713279486495917, 0.45141671341630396, 0.03856573705179283, nan, 0.12819285757013818, 0.4279405668488608, 0.45535903716704923, 0.0, 0.8891564381205536, 0.7534260714863522, 0.9520390401591446, 0.010587073054631307, 0.21693992819738858, 0.3621346900827125, 0.0] |
| 0.0653 | 16.5 | 6600 | 0.6069 | 0.5188 | 0.4270 | 0.8875 | [nan, 0.9290124922971863, 0.9589720557965155, 0.5377873831775701, 0.8408719669628694, 0.6464453726960179, nan, 0.7621001449552638, 0.8857807088295299, 0.2068851236588094, 0.9480908117204224, 0.6177862846793447, 0.0, 0.7590299277605779, 0.18791777021061926, 0.9075956355134117, 0.0, 0.058230565810488834, 0.9227427600247443, 0.14023410983625556, 0.6694696680432973, 0.503836987023172, 0.03972288954690206, nan, 0.19629273650968007, 0.5403046004082274, 0.5528350801001529, 0.0, 0.9376581699207615, 0.901014031526811, 0.9752275577414824, 0.015813440258609972, 0.5130362332093723, 0.44827147941026946, 0.0] | [nan, 0.8616804147441266, 0.8938918495590652, 0.4436595217282778, 0.7588707802865634, 0.4758728817247983, nan, 0.628730181301102, 0.688001179245283, 0.18745190773792766, 0.8877420745200684, 0.49290617097441625, 0.0, 0.5890833366705378, 0.07141145458902469, 0.5823605098793022, 0.0, 0.05773773981671383, 0.7947286013642479, 0.11004573329175761, 0.45664170004530313, 0.44804481905654414, 0.037985842126352344, nan, 0.1362925675933341, 0.4181863845162963, 0.46249953657361065, 0.0, 0.888743313770925, 0.7487091113564399, 0.952506386954324, 0.013629087889199198, 0.23068137169799252, 0.34552559761867596, 0.0] |
| 0.0946 | 16.75 | 6700 | 0.6065 | 0.5143 | 0.4299 | 0.8883 | [nan, 0.9366806425081413, 0.9542471674446813, 0.5289754672897197, 0.8420186089455377, 0.6348452391657562, nan, 0.7554582292706217, 0.8872989514636808, 0.24603338994987364, 0.95065695923075, 0.5426442743064132, 0.0, 0.6714138286893705, 0.17089166351368396, 0.8694632071182697, 0.0, 0.019113450108658656, 0.9217120922782911, 0.13903375883706684, 0.6740194249750934, 0.5118203708015244, 0.03178948544611431, nan, 0.20950157901963476, 0.5704453865075627, 0.5623407413972658, 0.0, 0.9411122045154043, 0.9100815747962009, 0.9743145830094165, 0.0857785237680799, 0.4308967871730781, 0.48645508025274165, 0.0] | [nan, 0.8651947384722789, 0.8930717543250574, 0.4526545293143849, 0.7524401466986995, 0.4887861010723328, nan, 0.6214073859834178, 0.6850152009083916, 0.21553648224427951, 0.8870252213407757, 0.45774305555555556, 0.0, 0.5674414547991802, 0.07292395457725634, 0.6296601151175575, 0.0, 0.018957592126106943, 0.7990749594007368, 0.11146433406780111, 0.4733450112755498, 0.44892412444043184, 0.03086520206129645, nan, 0.14343460931037075, 0.423674789416196, 0.4623610858079796, 0.0, 0.8878002154581935, 0.7401265142858424, 0.9527410923966566, 0.060905676756307404, 0.2440383021821195, 0.37124052036090577, 0.0] |
| 0.0849 | 17.0 | 6800 | 0.6239 | 0.5140 | 0.4277 | 0.8874 | [nan, 0.9305970330977147, 0.9554562297838712, 0.5320046728971962, 0.8489963736857462, 0.6542095907740937, nan, 0.7229605001215142, 0.8664610713099588, 0.28969717055387545, 0.9528962660454964, 0.4980859471474438, 0.0, 0.7176470588235294, 0.20759238239374447, 0.8862034811976359, 0.0, 0.031864477783887096, 0.9191836449171626, 0.12003509991887283, 0.6955934653201726, 0.5165258494982048, 0.04092407397061288, nan, 0.19217355485376905, 0.5895090804417229, 0.503489840686003, 0.0, 0.9408365537389992, 0.904218558679801, 0.9778653391859837, 0.011972108251481619, 0.48105021439167633, 0.4599672061542931, 0.0] | [nan, 0.8636437394553574, 0.8929500733790351, 0.4345244853931126, 0.7599993804727837, 0.46696218452852767, nan, 0.6206510046358703, 0.6983976442693793, 0.2497009515987931, 0.8874926753329814, 0.43156730923551545, 0.0, 0.5706314364255529, 0.11078207026517702, 0.6145475017593244, 0.0, 0.03131271548397056, 0.8003820861050736, 0.10237293400828867, 0.4670301606353909, 0.4459244664251144, 0.038865601952565394, nan, 0.13528195016335132, 0.4290314962729347, 0.43912572952498746, 0.0, 0.8877216097613865, 0.738180307717246, 0.9528556585267144, 0.010467599586006663, 0.24685847767824554, 0.3594826033565289, 0.0] |
| 0.0623 | 17.25 | 6900 | 0.6172 | 0.5119 | 0.4289 | 0.8887 | [nan, 0.9328785695913208, 0.9578098581195325, 0.5317383177570093, 0.8561058685577084, 0.6304827168234579, nan, 0.7396010541574238, 0.8636618114532428, 0.2868801524503915, 0.9518605630620964, 0.4947929529925084, 0.0009256810367627612, 0.7112487100103199, 0.18766553159288688, 0.8812836916282393, 0.0, 0.01743775037310502, 0.9291997485832975, 0.11260120200665574, 0.6826961479212292, 0.49109604568235565, 0.042125258394323704, nan, 0.18536317451599615, 0.5637959909980635, 0.5345549622210897, 0.0, 0.9375897612200349, 0.9104269853176398, 0.9785152351649676, 0.016857308632765553, 0.471885224247597, 0.4792468588859031, 0.0] | [nan, 0.8649230898296971, 0.8934913832615394, 0.4476893494179728, 0.7525214888224941, 0.47904609433387446, nan, 0.6239313691633799, 0.6925921698436251, 0.24592492631130367, 0.887597908356459, 0.43200359389038634, 0.000914435009797518, 0.5808680994521702, 0.10441372535260683, 0.6200052546206393, 0.0, 0.01701975415910659, 0.7967171468468032, 0.09773096322694678, 0.46324810420871126, 0.4373241271317872, 0.03999681722939819, nan, 0.13242564545240523, 0.42549338304851775, 0.45084188297733174, 0.0, 0.888754441570771, 0.7411121674604253, 0.9532170914369867, 0.015176070871411481, 0.2681904277926638, 0.37097400203468917, 0.0] |
| 0.087 | 17.5 | 7000 | 0.5958 | 0.5165 | 0.4323 | 0.8903 | [nan, 0.9358029442279695, 0.9581817889436154, 0.5173516355140186, 0.8565989717971686, 0.667348278703771, nan, 0.7453587599689061, 0.8783982540209707, 0.2597456398359501, 0.9499820544177967, 0.5674240553223018, 0.0, 0.7777605779153767, 0.14150586454786226, 0.8944761966616873, 0.0, 0.04935459377372817, 0.9190064859631538, 0.13516780079140384, 0.6902990697136872, 0.5223050718688348, 0.039750824068383706, nan, 0.1931621584511877, 0.5658763803841524, 0.501960958099754, 0.0, 0.9402762475045608, 0.9019702878007346, 0.9759436269037568, 0.012736230262339924, 0.4254506289499888, 0.5057514930417828, 0.0] | [nan, 0.8672982432946728, 0.8947683772895187, 0.45221659685446863, 0.7622893195763734, 0.4902560352855047, nan, 0.6223052874324095, 0.6932109212359029, 0.22966612333107453, 0.8909383965244376, 0.46376665320952765, 0.0, 0.5938460326215428, 0.08434187777193114, 0.602773750581284, 0.0, 0.048440150074523305, 0.8000458716174862, 0.11235893201211121, 0.479082966550413, 0.45730325325150806, 0.03797907547774101, nan, 0.13441877352901832, 0.42968388297967464, 0.43185024209844064, 0.0, 0.8885136898541194, 0.7448990572757507, 0.9530770665482792, 0.011476439106252173, 0.27282086031874275, 0.3826734258440253, 0.0] |
| 0.0493 | 17.75 | 7100 | 0.6044 | 0.5187 | 0.4325 | 0.8897 | [nan, 0.9240685866116948, 0.9622943353488201, 0.5353317757009346, 0.853514520592762, 0.6373741840672775, nan, 0.7478235165354141, 0.8836883806993405, 0.21751108165209826, 0.9509281473980792, 0.5420474191158311, 0.0, 0.7930340557275541, 0.22083490982469417, 0.8908310060401377, 0.0, 0.0858534286387558, 0.9207060529378274, 0.1411447209390884, 0.681761326480902, 0.5542661781464825, 0.03930387172467736, nan, 0.1931621584511877, 0.5752080389386088, 0.49312002836187985, 0.0, 0.9390712329452002, 0.9078367511279274, 0.9729394719810368, 0.022296821252434828, 0.4083602593021602, 0.5050154471862657, 0.0] | [nan, 0.8665364871726114, 0.892965816013915, 0.4547348114599635, 0.7642413653965189, 0.4857421136997843, nan, 0.6253954022706847, 0.6870444418213474, 0.19578268327242895, 0.8874360309454634, 0.462182366980205, 0.0, 0.6077345881608605, 0.08939146416173167, 0.6003337345442609, 0.0, 0.0839241381075478, 0.8010272384750775, 0.11626241894020498, 0.4793339806464354, 0.46760060321222136, 0.03759519038076152, nan, 0.13732648718299134, 0.4276941756073643, 0.42612058896739236, 0.0, 0.8882284916106664, 0.7388891943971531, 0.9525770980335972, 0.01913195000088903, 0.25993428881875097, 0.3840528604415517, 0.0] |
| 0.0609 | 18.0 | 7200 | 0.6040 | 0.5216 | 0.4331 | 0.8892 | [nan, 0.9227158454479248, 0.9619075870212453, 0.5316542056074767, 0.8629644863429278, 0.6514016366079864, nan, 0.7428586694795917, 0.8715519286425962, 0.2045030862918928, 0.9466966687245525, 0.5841977442990038, 0.005950806664903465, 0.7702786377708978, 0.22789759112120064, 0.8969036175878418, 0.0, 0.10873720315241013, 0.9154051507310187, 0.16112021722213943, 0.6850397847716271, 0.5074181749114659, 0.04494664506397005, nan, 0.19590827955512838, 0.5833045480713874, 0.5258912942323458, 0.0, 0.940934664449275, 0.8882331527914135, 0.9774381724580755, 0.014391396245182146, 0.43477819098132453, 0.5255548975681157, 0.0] | [nan, 0.8627327541149343, 0.8943888286230383, 0.44826842363954605, 0.7637335274754071, 0.48244240753868006, nan, 0.625331534198079, 0.6944541055496749, 0.18654700047236655, 0.8893611006867107, 0.4845014167207183, 0.005280450598451068, 0.5995903120857935, 0.10169968482665466, 0.5777541863213714, 0.0, 0.10625831542319107, 0.8006913747953047, 0.12712606139777924, 0.4783386384345389, 0.44333322627096416, 0.042293134265587215, nan, 0.148674558186062, 0.4270657907089471, 0.4375414792419438, 0.0, 0.8881646826265218, 0.746841100561318, 0.9521439225045568, 0.01294715575036877, 0.24666520631333802, 0.38409386690619945, 0.0] |
| 0.0594 | 18.25 | 7300 | 0.6184 | 0.5184 | 0.4328 | 0.8884 | [nan, 0.9404973526006469, 0.9537239028155554, 0.5275303738317757, 0.8254461719223712, 0.6778219046293364, nan, 0.7472383523016173, 0.8659581534373962, 0.2943783918140768, 0.9543757743601257, 0.5650160533465053, 0.0, 0.7537667698658411, 0.19283642325640055, 0.8840439696044684, 0.0, 0.053517660304244236, 0.9223867864255677, 0.14299077799301313, 0.6933990487935829, 0.5170742093202789, 0.040644728755796417, nan, 0.19868186187010847, 0.5769927251792537, 0.5184906162061554, 0.005237711522965351, 0.936523983230326, 0.8965774712364731, 0.9780089834131267, 0.013717932777984998, 0.4056981446483367, 0.5054707620798113, 0.0] | [nan, 0.8646951423015076, 0.8916557550473645, 0.4456280068092665, 0.7798208455321158, 0.4668012972723517, nan, 0.6275296552822227, 0.693191442493572, 0.24416726797924612, 0.8882015249296725, 0.4734908589168679, 0.0, 0.6010533245556287, 0.10449699289229086, 0.6037870806764625, 0.0, 0.0522041170761608, 0.8024731726060429, 0.12131790023739622, 0.47577199080928667, 0.44858497899759875, 0.038707102952913006, nan, 0.1414826837710464, 0.42720162129381883, 0.43218883327484625, 0.005164878823996822, 0.8886286814206171, 0.7396195316490108, 0.952706951959097, 0.011655776057680246, 0.24503522596165647, 0.3835704565398948, 0.0] |
| 0.0616 | 18.5 | 7400 | 0.6177 | 0.5082 | 0.4272 | 0.8887 | [nan, 0.9388723599691342, 0.9564944313754319, 0.5251226635514019, 0.8417103211148066, 0.6482573931295971, nan, 0.7321895483979944, 0.8855861839920293, 0.2417250093210158, 0.9506753528629689, 0.5459990121017535, 0.0, 0.656656346749226, 0.11275066212637155, 0.8765912190686498, 0.0, 0.07320713219699945, 0.9230813488667519, 0.11395056209539893, 0.703570900866502, 0.5234722511549255, 0.043466115425442764, nan, 0.1751201427982974, 0.5677919087245512, 0.4888879041013937, 0.00040290088638195, 0.9391572478144832, 0.8977247029883181, 0.9766107386702634, 0.018289713622611795, 0.4217114755430917, 0.4846827041793997, 0.0] | [nan, 0.8641564182971058, 0.8921133993393542, 0.4501424016407233, 0.7647378890792713, 0.4769587373086239, nan, 0.6209624017506187, 0.6859163987138264, 0.20884410959394406, 0.8903311694707657, 0.45434149683164926, 0.0, 0.5354933726067747, 0.07164035579774021, 0.6122940826221327, 0.0, 0.06951938138690669, 0.8003213370838211, 0.09716584900998836, 0.4828652554046836, 0.45382137270368395, 0.04121417598135297, nan, 0.13381035314854062, 0.43221966358833797, 0.42342013855571975, 0.00040160642570281126, 0.8881950211846364, 0.7398417591158966, 0.9530845970447974, 0.014810386777414213, 0.2365547272188405, 0.37402163767775426, 0.0] |
| 0.0611 | 18.75 | 7500 | 0.6099 | 0.5177 | 0.4324 | 0.8902 | [nan, 0.9345079533755389, 0.9638643589649342, 0.5356553738317757, 0.8422997643013702, 0.6257334001805861, nan, 0.7471220088972541, 0.8814537173221996, 0.2763370479307345, 0.9466207360377004, 0.6049436074750967, 0.0, 0.7059855521155831, 0.14970361962416445, 0.8782149119958433, 0.0, 0.0958028958186055, 0.9234898906602255, 0.14089637245649764, 0.6854742792438918, 0.5173606430820885, 0.04232080004469523, nan, 0.19343677056158176, 0.5813811692050034, 0.5071015488245331, 0.00040290088638195, 0.9400356746670351, 0.8951641148114238, 0.9764509546423178, 0.03372756848605413, 0.4723729399093662, 0.4701335776577261, 0.0] | [nan, 0.8647971283970989, 0.8977857991553266, 0.4345779290016539, 0.7684148484664771, 0.4855945598832977, nan, 0.6259089780170273, 0.686933822387541, 0.2366516479228013, 0.8888089337936385, 0.48289741736216074, 0.0, 0.5985650538104821, 0.061681563084597796, 0.6094675222969052, 0.0, 0.09345866005976859, 0.7993214394154491, 0.11438556403104944, 0.4762232900770807, 0.45242021144786737, 0.04009209272785011, nan, 0.14212501513256123, 0.43339055459103054, 0.4277836968915307, 0.00040032025620496394, 0.8873505568836287, 0.7422385564869821, 0.9528040989243474, 0.029041136219678652, 0.23652292476444373, 0.3661642120469451, 0.0] |
| 0.0526 | 19.0 | 7600 | 0.6228 | 0.5108 | 0.4297 | 0.8909 | [nan, 0.9405315503656566, 0.9623814025398809, 0.5330642523364486, 0.8317861268903274, 0.6622725273804787, nan, 0.7263120519701678, 0.8674004839398396, 0.27552922656282364, 0.9455175897361646, 0.5819338108174859, 0.0, 0.6111971104231166, 0.16710808424769832, 0.8864145612781711, 0.0, 0.0827900400596968, 0.930233313789279, 0.11843739134753886, 0.6995346374019279, 0.5042107294717365, 0.042153192915805354, nan, 0.18371550185363175, 0.5630920605013869, 0.5005871795439941, 0.0056406124093473006, 0.9407823912509976, 0.8985265242187241, 0.9751204970628252, 0.012990074184591156, 0.42681216850576115, 0.4687243361620586, 0.0] | [nan, 0.8642299686902748, 0.8983701844671692, 0.4505770666371748, 0.7744797343632894, 0.49247659714013137, nan, 0.623426329007179, 0.696151825084343, 0.23867367627796818, 0.8898312419634539, 0.48430193720774883, 0.0, 0.5244863620262132, 0.07708866651151966, 0.5993412927130506, 0.0, 0.08080962968642183, 0.7977044198782267, 0.10166926045153175, 0.47672785170429793, 0.4451483954200063, 0.04006265597621197, nan, 0.1264172335600907, 0.43160647951283304, 0.42598284151975113, 0.00554016620498615, 0.8878311660408268, 0.74270285241124, 0.9536917187049466, 0.011887351052557973, 0.24007269734586106, 0.3689853153957455, 0.0] |
| 0.054 | 19.25 | 7700 | 0.6199 | 0.5112 | 0.4157 | 0.8897 | [nan, 0.9383711032345364, 0.9577791893332354, 0.532998831775701, 0.8352225138198671, 0.6740592830016223, nan, 0.7513879337239024, 0.8669212886084358, 0.21351340154935997, 0.9451751851979368, 0.5077796986910348, 0.0, 0.7028895768833849, 0.18400807163576743, 0.8914236539585634, 0.0, 0.1072709658838007, 0.9291372462420467, 0.11183132171062435, 0.6577470949582549, 0.5160479493180732, 0.04262807978099335, nan, 0.1900590416037347, 0.5664154498351389, 0.5106689415257805, 0.0012087026591458502, 0.9410463493811095, 0.8949234994980861, 0.9775344732695309, 0.011246839902192383, 0.42160986811355644, 0.47790186427705494, 0.0] | [0.0, 0.8647432445871411, 0.896112476860621, 0.45036567465468447, 0.76789556797279, 0.4910576591298745, nan, 0.6249728507663073, 0.6958387758910245, 0.19385049365303245, 0.8887827463711233, 0.4413911550021468, 0.0, 0.5792159197210647, 0.08409221902017291, 0.5936591009850886, 0.0, 0.10176353700943865, 0.7979000623472865, 0.09749989173896098, 0.46787846117983983, 0.45133395403669296, 0.04032236755185625, nan, 0.1322593590552084, 0.4340972401884397, 0.4265909006774516, 0.0011904761904761906, 0.8880726081330668, 0.743872268803543, 0.953516990645358, 0.009541850530053972, 0.23069652626428858, 0.3703797514940341, 0.0] |
| 0.0671 | 19.5 | 7800 | 0.6217 | 0.5094 | 0.4146 | 0.8892 | [nan, 0.9331891438463118, 0.9574927175990591, 0.5350619158878505, 0.834028291700058, 0.6744756411977813, nan, 0.7431025597272566, 0.8738719931679082, 0.2327354074319566, 0.9446516741270925, 0.5379723388490986, 0.0, 0.669969040247678, 0.18249463992937318, 0.8913668247061116, 0.0, 0.09954703741523316, 0.9238793920053711, 0.0888259739399659, 0.6886532573187448, 0.5368212898403323, 0.03941560981060394, nan, 0.18061238500617877, 0.5652404877793479, 0.5268662338525626, 0.0060435132957292505, 0.9420171078199074, 0.9042006331836784, 0.9732816357580515, 0.009485473911061379, 0.3114064500396269, 0.49469125180868956, 0.0] | [0.0, 0.8617017485872825, 0.8957626230741332, 0.4508312580591182, 0.7683050299189929, 0.4878950714613818, nan, 0.624948812708509, 0.6911476098809349, 0.20973251451290761, 0.8882723484572987, 0.46124933827421916, 0.0, 0.5501928047798635, 0.07156988821841923, 0.5965012359764214, 0.0, 0.09680704791974334, 0.7988314631673791, 0.07901907356948229, 0.4711932405689982, 0.46080549284533756, 0.03769502030348365, nan, 0.13494050061551088, 0.43071416464770335, 0.43780380026513477, 0.005912495072920773, 0.8877312783085815, 0.7390862578001592, 0.9533931934816451, 0.008087813065948142, 0.20454363437358178, 0.3783462459982845, 0.0] |
| 0.0512 | 19.75 | 7900 | 0.6300 | 0.5080 | 0.4263 | 0.8887 | [nan, 0.9391756156362827, 0.957153465687716, 0.531875, 0.8363349452907067, 0.6442373192444947, nan, 0.7406369413577534, 0.8858234094036154, 0.26463399478023114, 0.9530349257345309, 0.5036634559973656, 0.0, 0.6101651186790505, 0.1925841846386682, 0.8746996168084692, 0.0, 0.0674207315476658, 0.9178750280173988, 0.11324690806139175, 0.6909895794473874, 0.5175153479480927, 0.042963294038773116, nan, 0.2016476726623644, 0.5813497671010625, 0.5020052735370366, 0.008058017727639, 0.9412167663408764, 0.897734355178538, 0.9747767193057303, 0.01633407932363546, 0.3496514865166941, 0.49998742995692663, 0.0] | [nan, 0.8625082043880324, 0.8957494129402008, 0.43782876705742063, 0.7496431303023787, 0.48514174134060595, nan, 0.6274006504670441, 0.6871961161760971, 0.2302687309626372, 0.8882991958037961, 0.4373045513839996, 0.0, 0.5170981283890153, 0.08045310853530031, 0.6189258899694966, 0.0, 0.06474078543772313, 0.7999986290910134, 0.09763826734899257, 0.47261393142851427, 0.4453505921742053, 0.040873817370043586, nan, 0.1437999373335422, 0.43193558986563074, 0.42771380026430056, 0.007840062720501764, 0.887320160440498, 0.7455157136812743, 0.9534156947680599, 0.013436060460141392, 0.21404224616226705, 0.3788044726196485, 0.0] |
| 0.0535 | 20.0 | 8000 | 0.6326 | 0.5129 | 0.4292 | 0.8889 | [nan, 0.9375849538350132, 0.9591767441005661, 0.5300221962616822, 0.8259597228240738, 0.6596635135950806, nan, 0.7492101575548236, 0.8658110736822129, 0.2693152160404325, 0.9484445354169388, 0.5863176092862435, 0.0, 0.6744066047471621, 0.20784462101147685, 0.883142820029876, 0.0, 0.07781530646977194, 0.9271092315337143, 0.10147518998658918, 0.678314629589805, 0.497267391277709, 0.043242639253589586, nan, 0.18442949334065634, 0.576354215732454, 0.5145022268507234, 0.007252215954875101, 0.939646591781763, 0.9018448093278766, 0.9767371671098836, 0.012725869285921506, 0.41707817675628445, 0.45857891473041446, 0.0] | [nan, 0.8619435562270654, 0.8965635233177199, 0.4407369269775891, 0.7663725441548623, 0.48239880840583743, nan, 0.6305089171096815, 0.6940516487277982, 0.23291892085557667, 0.8902205646366161, 0.48581173260572985, 0.0, 0.5452649144764289, 0.09688988182726792, 0.6044686963431372, 0.0, 0.07672845562038519, 0.7962772336784573, 0.08572747363415112, 0.4690486788330029, 0.43758222088032955, 0.04117568825641708, nan, 0.13543326140878018, 0.4322105242501251, 0.4339781328847771, 0.007067137809187279, 0.8877484539815808, 0.7395098273111396, 0.9530623665306688, 0.010661406489721605, 0.2371072088724584, 0.3613527133617203, 0.0] |
| 0.0467 | 20.25 | 8100 | 0.6268 | 0.5170 | 0.4303 | 0.8886 | [nan, 0.9395265086570245, 0.956900821509961, 0.5300023364485982, 0.8314043061203785, 0.6477819071422676, nan, 0.7464739330448017, 0.8916828770697918, 0.24499772152947513, 0.9451416993546665, 0.549950605087676, 0.0, 0.687203302373581, 0.1523521251103544, 0.8917889848671819, 0.0, 0.08004084518105412, 0.915062008738324, 0.1551515753572079, 0.6881485415176292, 0.526278382981852, 0.04472316889211688, nan, 0.18451187697377455, 0.5879677605066206, 0.549156898805699, 0.007655116841257051, 0.940224100990058, 0.9054685173132715, 0.9762965505479732, 0.02776741680135936, 0.449734804608913, 0.49033782689095345, 0.0] | [nan, 0.8644696780108341, 0.8944980656632955, 0.440104340976533, 0.7641389998117053, 0.4770745740308388, nan, 0.6297284505666034, 0.6844286473848664, 0.21773065311832707, 0.8890008282328474, 0.46004855121119775, 0.0, 0.5750680081177943, 0.06133536430566133, 0.6000371448704572, 0.0, 0.07885979620791951, 0.8006806868947128, 0.1252363801594355, 0.4706566275608475, 0.45444853884552, 0.04241284306453322, nan, 0.13328969033307544, 0.4323046138453842, 0.45063456852976475, 0.007448059584476676, 0.888463849852071, 0.7450400534159003, 0.9535229169698916, 0.021638336996913712, 0.23653075402126864, 0.371412309599829, 0.0] |
| 0.0566 | 20.5 | 8200 | 0.6333 | 0.5121 | 0.4287 | 0.8890 | [nan, 0.9382327153916955, 0.9575874232706021, 0.5340771028037383, 0.8342787755625269, 0.6541523107263972, nan, 0.7406429739787204, 0.8870285144944726, 0.2079415054476159, 0.9479172512933317, 0.5500535111550177, 0.0, 0.7218266253869969, 0.17152226005801488, 0.8854728193803988, 0.0, 0.06920116251669153, 0.9246219694901651, 0.12077186708389212, 0.6759797704055135, 0.5097310892447952, 0.045561204536566285, nan, 0.1750377591651792, 0.5736405505835558, 0.5156101127827879, 0.00684931506849315, 0.9398823262828916, 0.9029458484550981, 0.9765633952545758, 0.017017903767251024, 0.4133390233493873, 0.48943837047548283, 0.0] | [nan, 0.8643736263008805, 0.8951902105356352, 0.44089650982245326, 0.7609522214327652, 0.4848458703216258, nan, 0.6265179780801705, 0.6811413623628766, 0.1878590542487696, 0.887796763348636, 0.46558542236468475, 0.0, 0.5934331650617232, 0.06971498872257535, 0.6047629609093429, 0.0, 0.06810626948746361, 0.7983954196511591, 0.10178182731484066, 0.4720678124715856, 0.44954610542241913, 0.0431413003227001, nan, 0.12741374485267662, 0.432512153928718, 0.4367328553732968, 0.006685017695635077, 0.8879940574069723, 0.7494547941207608, 0.9536808104413358, 0.013580974233357105, 0.23932508912918143, 0.374424364423531, 0.0] |
| 0.0445 | 20.75 | 8300 | 0.6446 | 0.5134 | 0.4274 | 0.8856 | [nan, 0.9405399334753671, 0.9458917035764169, 0.5273960280373832, 0.8282526135651365, 0.6846166732980127, nan, 0.7372879749180856, 0.8847701285761731, 0.2182567629147852, 0.9486374327394391, 0.565180703054252, 0.0, 0.6657378740970072, 0.14856854584436877, 0.8831509384945119, 0.0, 0.06705417223051345, 0.9206841150299712, 0.12586301097700292, 0.6806553405515008, 0.5199094440427905, 0.04444382367730041, nan, 0.17805849237951393, 0.5833280996493432, 0.5248720391748466, 0.007252215954875101, 0.9356924613611799, 0.9010464353082633, 0.9759161892423923, 0.023617845745783083, 0.4449998983925705, 0.5172488924395381, 0.0] | [nan, 0.8666434932726657, 0.8860462410088557, 0.4516813574923211, 0.7742782740775649, 0.4555874524449895, nan, 0.6267926037830955, 0.6896407624091181, 0.1957204153277486, 0.8882182070612508, 0.46149838666308146, 0.0, 0.5469962267350659, 0.06421718273004798, 0.6011771207515888, 0.0, 0.06543011164763292, 0.79986647852113, 0.10526898843730527, 0.4713830230218466, 0.45188595346756627, 0.04203767801939388, nan, 0.1276553855846278, 0.42972506139948413, 0.441923808813104, 0.007075471698113208, 0.8884781477624152, 0.7456781431206605, 0.9535186762124032, 0.016432559463950374, 0.2430653450400151, 0.37996353686275436, 0.0] |
| 0.0523 | 21.0 | 8400 | 0.6334 | 0.5087 | 0.4256 | 0.8903 | [nan, 0.933221079502352, 0.9637948085900169, 0.5297546728971962, 0.8356436570172051, 0.6448230539257773, nan, 0.7465713167832686, 0.8749679745694359, 0.2327354074319566, 0.9465962111947419, 0.5354408495924919, 0.0, 0.6270897832817337, 0.14024467145920042, 0.8939972072481652, 0.009888751545117428, 0.05998481397114654, 0.9259419692666467, 0.10259275815824766, 0.6911110038285254, 0.5109028637249255, 0.044248282026928876, nan, 0.19286008512975422, 0.5704035170356414, 0.5006314949812767, 0.0, 0.9387582194599503, 0.9072224581646499, 0.9775237134023292, 0.011000766712254964, 0.4426019630555386, 0.48799979887931083, 0.0] | [nan, 0.8627899844290204, 0.898045292380419, 0.4429741700156492, 0.7733528050732301, 0.48122023215814036, nan, 0.6285033134107889, 0.6922586045743415, 0.2067303269489062, 0.888126363728484, 0.4555339601828019, 0.0, 0.512374046123361, 0.062230678829257376, 0.5926462119703566, 0.00044943820224719103, 0.05796624750145485, 0.8002256522783529, 0.08795100349163994, 0.4798915494731881, 0.45172247073689, 0.0420103434557751, nan, 0.13598869181318254, 0.4315342675118884, 0.4297071129707113, 0.0, 0.8889534278458562, 0.7430008362351238, 0.9537407288817968, 0.009678051537276564, 0.23964350552896518, 0.3711983987778357, 0.0] |
| 0.0715 | 21.25 | 8500 | 0.6366 | 0.5151 | 0.4287 | 0.8894 | [nan, 0.9370145031789949, 0.9615540919282511, 0.5349906542056074, 0.8234293246215806, 0.6427307923986297, nan, 0.7520265297434068, 0.877506286473407, 0.2407929077426571, 0.9458038701145451, 0.5871614390384458, 0.0, 0.6843137254901961, 0.1972505990667171, 0.8854890563096707, 0.054388133498145856, 0.06252454638284502, 0.9220868993644009, 0.11473699895693637, 0.6793299129694406, 0.505244648130675, 0.04341024638247947, nan, 0.19102018399011397, 0.5753257968283875, 0.5107132569630631, 0.0, 0.9400241164189752, 0.9050651936505135, 0.9789779094546415, 0.014533859670935389, 0.41945579060740923, 0.49523735034665384, 0.0] | [nan, 0.8636190041686136, 0.8961979040679402, 0.44008160621637177, 0.7735135302856915, 0.47552992149378714, nan, 0.6295369121222396, 0.6946632262523146, 0.2137970353477765, 0.8882677382290695, 0.4793581450054608, 0.0, 0.555406650473239, 0.08438545376065609, 0.5980720618958058, 0.002378506946321423, 0.06108823002737203, 0.7997681127577295, 0.0970839783417272, 0.47365876347968716, 0.44734126160727244, 0.041260653691952316, nan, 0.13688871396241267, 0.4310366799265186, 0.42952982613070945, 0.0, 0.8887487055026462, 0.7433844306901257, 0.9533070831491001, 0.012093141544284045, 0.23472485984284203, 0.3736148179836323, 0.0] |
| 0.0856 | 21.5 | 8600 | 0.6332 | 0.5104 | 0.4282 | 0.8891 | [nan, 0.9354302285089335, 0.9598914301992207, 0.5326285046728972, 0.8348257505275104, 0.6418013774311685, nan, 0.7519851631996333, 0.8757413294112065, 0.2316790256431501, 0.9473149777460632, 0.5441672841030707, 0.0, 0.6676986584107327, 0.19119687224114013, 0.8908797168279535, 0.0, 0.05576938182389443, 0.9230974918555517, 0.1150019040050332, 0.6832652332737915, 0.5057945396840957, 0.04410860941952064, nan, 0.19250308938624194, 0.5698984665305908, 0.50395515277747, 0.0040290088638195, 0.9408126308534799, 0.8986623443239606, 0.9766785258336341, 0.01867306975009325, 0.40035359385478264, 0.4951898635172656, 0.0] | [nan, 0.8652175117062043, 0.8949487144681932, 0.4437434730009742, 0.7611759319446382, 0.47865894832193984, nan, 0.6331643341293494, 0.6931150372692965, 0.2068423485899214, 0.8889820786499946, 0.4611976486594917, 0.0, 0.5675936485656636, 0.08603859250851305, 0.595085736597217, 0.0, 0.05421502748930971, 0.799696203512091, 0.09667497111998775, 0.4707822447654798, 0.4485026865801383, 0.041887733446519526, nan, 0.13581323258742614, 0.4329091328339933, 0.42695701145109816, 0.003957261574990107, 0.8887286680634571, 0.7476012702986532, 0.953293396822863, 0.014771330218834523, 0.23667139184546263, 0.3740649694565481, 0.0] | |
| 0.0426 | 22.25 | 8900 | 0.6388 | 0.5153 | 0.4321 | 0.8907 | [nan, 0.9365843032790866, 0.9619280328787767, 0.5323341121495327, 0.832118008177492, 0.6589330390083284, nan, 0.7530012289310712, 0.8876025999905109, 0.2356145656406645, 0.9495151391383951, 0.5967728657281633, 0.0, 0.6851909184726522, 0.16698196493883213, 0.8856433071377541, 0.0, 0.046160291152829054, 0.9249913955800083, 0.14087981589099158, 0.6780864102710397, 0.5070796622838727, 0.043214704732107936, nan, 0.19390361114925167, 0.577557963050191, 0.5263122908865303, 0.009266720386784852, 0.9401577082628303, 0.9045005405226523, 0.9759350190099954, 0.014261884039951924, 0.44343514397772765, 0.48190053464583205, 0.0] | [nan, 0.8638275353000382, 0.8975929370440341, 0.44847327680807825, 0.7680456934961463, 0.4896127563059361, nan, 0.6344922288860472, 0.6906430201049919, 0.21071058091286307, 0.8908914064913077, 0.4893922260291313, 0.0, 0.5741773684438103, 0.0915502696722445, 0.6133303348044865, 0.0, 0.045543787135107205, 0.799706519605589, 0.11493135050077327, 0.47303106132662764, 0.44896719237169413, 0.04119511090991399, nan, 0.13769769301273427, 0.43323479414732197, 0.4435750434181777, 0.008966861598440545, 0.8892865533176849, 0.7464162172003368, 0.9537521470921787, 0.012501163611760084, 0.24370386088743454, 0.37164396457569027, 0.0] |
| 0.0544 | 22.5 | 9000 | 0.6275 | 0.5126 | 0.4297 | 0.8902 | [nan, 0.9362912936349177, 0.962198079008307, 0.5305654205607476, 0.829452734049054, 0.6501778145136554, nan, 0.7606583485441561, 0.8785880343502396, 0.2379137495339492, 0.9477460490242178, 0.5748332921709064, 0.0, 0.6779153766769865, 0.15399167612561482, 0.8968792621939339, 0.0, 0.062053255832220565, 0.9268894385323623, 0.11712114438980778, 0.6830882170073133, 0.515366328868847, 0.046119894966199226, nan, 0.1939585335713305, 0.5666535824566913, 0.5097161596242051, 0.0064464141821112, 0.9399919952412273, 0.8983810519232679, 0.9745475341343337, 0.015694289029798168, 0.43490011989676686, 0.47604289457365206, 0.0] | [nan, 0.8648796447130465, 0.8972780355218145, 0.44448663694053075, 0.7723828909831303, 0.4856595115662902, nan, 0.6367705951823552, 0.693571040656192, 0.2097133467226584, 0.8885713515050402, 0.47493538294109644, 0.0, 0.5753448653382964, 0.07485745815707191, 0.589861603519713, 0.0, 0.060925449871465295, 0.7986432258569581, 0.09907840555757864, 0.4719490094091225, 0.45171147174755927, 0.04363338442835245, nan, 0.13716960245479792, 0.4304074481173985, 0.4370060790273556, 0.00631163708086785, 0.8878797422918536, 0.748175287257327, 0.9535688641919678, 0.013234083170064194, 0.2360317635381052, 0.36728912241605793, 0.0] |
| 0.0701 | 22.75 | 9100 | 0.6508 | 0.5132 | 0.4302 | 0.8902 | [nan, 0.9420095059141509, 0.9626173339520694, 0.5384521028037383, 0.8237863722622742, 0.6345902505663333, nan, 0.7493342571861443, 0.8728092233240025, 0.24462488089813164, 0.9462424874982255, 0.5649748909195687, 0.0, 0.6890092879256966, 0.18148568545844368, 0.8978859518087939, 0.0, 0.06417406331003063, 0.926905788482557, 0.10334608188877299, 0.6837845785184178, 0.5068636881640055, 0.044555561763226996, nan, 0.19329946450638474, 0.5856309206050139, 0.5353969555294587, 0.008058017727639, 0.9389002783925003, 0.9000722535382172, 0.9752872750044519, 0.01801255750341912, 0.4159604950313967, 0.4749814242696805, 0.0] | [nan, 0.8667971887550201, 0.8964523921395798, 0.43883250929953793, 0.7789739251684871, 0.4822597903246794, nan, 0.6338344499902683, 0.6949882507612449, 0.21506355392067597, 0.8897027195058894, 0.47454492022058187, 0.0, 0.5744214058332616, 0.09034404821697639, 0.5890266504761296, 0.0, 0.06334315397736083, 0.7983683031468644, 0.08797806890816708, 0.47160166966502776, 0.4468892814313033, 0.04230993686667728, nan, 0.13598253612549263, 0.43447527412791603, 0.442910823939144, 0.007836990595611285, 0.8890303591865106, 0.7479650947941834, 0.9538041433738902, 0.014260666277030976, 0.23761100470137558, 0.3677322595225377, 0.0] |
| 0.0588 | 23.0 | 9200 | 0.6510 | 0.5156 | 0.4306 | 0.8898 | [nan, 0.9386450845503147, 0.9615407102293612, 0.5321039719626168, 0.8252994992682097, 0.646236577683447, nan, 0.7500099107344458, 0.8891493096740523, 0.2356145656406645, 0.948320024675765, 0.5611467852144563, 0.0, 0.7061919504643963, 0.15790137470046664, 0.8929012145223095, 0.0, 0.06268164323305318, 0.9247904360655894, 0.12226195797943674, 0.6746470281016981, 0.5158947761834156, 0.04522599027878652, nan, 0.1926953178635178, 0.5791620871931753, 0.5486694289955906, 0.014504431909750202, 0.9393220200484532, 0.9030809791181759, 0.9764800062837624, 0.014337001118985454, 0.46371598691296306, 0.476005184444432, 0.0] | [nan, 0.8636880663267268, 0.8963496684957871, 0.4393286431075093, 0.7694031519559503, 0.48618816019454364, nan, 0.6323091767222339, 0.6843731284418411, 0.20910695246148756, 0.8901931512501616, 0.4713865836791148, 0.0, 0.594294150853272, 0.07763859605605854, 0.5971841386537511, 0.0, 0.061455525606469004, 0.799169285452784, 0.10285033809898536, 0.4708681854568623, 0.4517361674617981, 0.04280237937871778, nan, 0.1379100253532753, 0.432983014903532, 0.45285296269202635, 0.013830195927775643, 0.8892098290384068, 0.7459428984706676, 0.9536680185853351, 0.012051498108992573, 0.23353802067342136, 0.36591936147117593, 0.0] |
| 0.067 | 23.25 | 9300 | 0.6275 | 0.5128 | 0.4311 | 0.8905 | [nan, 0.9372797021893622, 0.9638153118797325, 0.5312441588785046, 0.8278251787794161, 0.6422768634184979, nan, 0.7515353020360958, 0.8786212459078616, 0.24139359542648825, 0.9490656742280216, 0.5420885815427677, 0.0, 0.7038183694530443, 0.17707150964812712, 0.8822822627784633, 0.0, 0.06734218312256172, 0.9252767953435341, 0.10501829500488419, 0.6879495810858851, 0.5059293320425944, 0.04416447846248394, nan, 0.19404091720444872, 0.5719029674988224, 0.5293478983403869, 0.008058017727639, 0.9393905631474131, 0.9031768115782158, 0.9770540451989742, 0.01500269385386879, 0.4205734723322969, 0.4884174036436365, 0.0] | [nan, 0.8641485198316792, 0.897149130251509, 0.4431534355853929, 0.7712457425720085, 0.4882715323914724, nan, 0.6318488634618116, 0.69528994349434, 0.21461061083181407, 0.890398769558611, 0.46117346313448776, 0.0, 0.5855585129217824, 0.08629909644108427, 0.608788204714529, 0.0, 0.0658912742737101, 0.7992632312490636, 0.09043857647998176, 0.47160302909046053, 0.44752081120336445, 0.04198645598194131, nan, 0.13798894682367646, 0.43383933729163815, 0.44664223751121745, 0.007836990595611285, 0.8889539638268134, 0.7463182889742939, 0.9538402391601662, 0.01284986599932556, 0.2406063988095238, 0.3716953276213374, 0.0] |
| 0.0513 | 23.5 | 9400 | 0.6472 | 0.5144 | 0.4306 | 0.8897 | [nan, 0.938401309042541, 0.9600648179629494, 0.5333469626168225, 0.832045261686822, 0.6450022850427629, nan, 0.7455948939896135, 0.883593490534706, 0.23551099879862464, 0.9506135691239773, 0.5523380258500041, 0.0, 0.6968524251805985, 0.18312523647370413, 0.8904413197376112, 0.0, 0.06160814808996413, 0.9256348385566595, 0.12978691700193712, 0.6801915871922148, 0.5208407367015084, 0.04416447846248394, nan, 0.1951942880681038, 0.5735463442717329, 0.5357736367463606, 0.010072522159548751, 0.9380115028759878, 0.9056712133078884, 0.9770508172388136, 0.017681006258029756, 0.4195573980369445, 0.4783152790270228, 0.0] | [nan, 0.8645788687513425, 0.8959992534632647, 0.44551363683824813, 0.7647562903055005, 0.48403962995403316, nan, 0.6342904860496079, 0.6900071507171095, 0.2094308344078099, 0.8896775711392028, 0.4683431642874594, 0.0, 0.5778034484233945, 0.08829968377523717, 0.5990191205946445, 0.0, 0.060376680693831467, 0.7987594181280973, 0.10780592458123607, 0.47080665968645763, 0.45253694794349175, 0.04196862307876085, nan, 0.13750677087363616, 0.4326699094290159, 0.44833404409174343, 0.009754194303550527, 0.8891644113783483, 0.7456061236432407, 0.9539508207140677, 0.014409173235161254, 0.23587072008774035, 0.3678274990977986, 0.0] |
| 0.0514 | 23.75 | 9500 | 0.6439 | 0.5126 | 0.4298 | 0.8893 | [nan, 0.9377822895762951, 0.9605358193045652, 0.5385, 0.8340916008081545, 0.6271635536295225, nan, 0.7452691324573968, 0.884822318166722, 0.22701851775135673, 0.9488086350085531, 0.537766526714415, 0.0, 0.6666150670794634, 0.20002522386177324, 0.8838085341300254, 0.0, 0.05781164087660042, 0.9238019884436897, 0.11829666054073742, 0.6694155391023081, 0.5142496967171933, 0.043549918989887706, nan, 0.19379376630509407, 0.5833176322813628, 0.5375905696749462, 0.014101531023368252, 0.9389680151020606, 0.9049790133806934, 0.9761012589582619, 0.02082556260101952, 0.414029953870227, 0.5005852053386369, 0.0] | [nan, 0.863411965165267, 0.894931428278196, 0.4402552004737254, 0.7611011560258087, 0.4837046157587918, nan, 0.6314089786667951, 0.6898753375504013, 0.2022476056909819, 0.8895664124405706, 0.4596777031068576, 0.0, 0.5673444293179922, 0.08523215821152193, 0.6083079089415631, 0.0, 0.056674965989886805, 0.7993862287218525, 0.09987768652804473, 0.4710007534678047, 0.450200875376809, 0.041379127295891285, nan, 0.1393342283999368, 0.4316562226473846, 0.44881423656073105, 0.013539651837524178, 0.8892954904899649, 0.7457058534465373, 0.9537927510495554, 0.016624966398544282, 0.24126375122858124, 0.37717282181124784, 0.0] |
| 0.0396 | 24.0 | 9600 | 0.6535 | 0.5114 | 0.4293 | 0.8894 | [nan, 0.9355970923117436, 0.9613217787436595, 0.5374941588785047, 0.8288621111896686, 0.642493049404965, nan, 0.7527694039253403, 0.878070882952982, 0.22343510501677782, 0.9446323372316829, 0.5478719025273731, 0.0, 0.6478844169246646, 0.1983856728465128, 0.8865769305708905, 0.0, 0.07386170240620009, 0.92611209153323, 0.1052169737909568, 0.6754384809956214, 0.5089943264670923, 0.04279568690988323, nan, 0.19272277907455718, 0.5795022766525357, 0.533735126631362, 0.008058017727639, 0.9392768622420797, 0.9018779025514876, 0.9758392561919, 0.014779932860872808, 0.4110833384137048, 0.4900487159002665, 0.0] | [nan, 0.8639528354166897, 0.8950065886128323, 0.44207385913246505, 0.7660355663095111, 0.48472638815638147, nan, 0.632634318964356, 0.6931134697057083, 0.20094633110411506, 0.8905903659512103, 0.4648726053472574, 0.0, 0.5535911115030201, 0.08658556723729839, 0.604755865918694, 0.0, 0.0724857392466211, 0.7980282230680995, 0.09017126154632008, 0.4707250951496855, 0.44738482499754295, 0.04074793201585233, nan, 0.13850404578646142, 0.43285457950063133, 0.4469182529964006, 0.007840062720501764, 0.8885988668670501, 0.746866946124605, 0.9537924535842215, 0.012023161337086795, 0.24114295250810605, 0.37191019096397804, 0.0] |
| 0.0572 | 24.25 | 9700 | 0.6468 | 0.5169 | 0.4312 | 0.8893 | [nan, 0.9401996856733055, 0.9583929096522826, 0.5344988317757009, 0.8275082400146594, 0.6494017622545427, nan, 0.7543103076809053, 0.8711154338852778, 0.24802187331703882, 0.9453213909924968, 0.5670947559068082, 0.0, 0.7040763673890609, 0.20204313280363223, 0.8891017730726765, 0.0, 0.06668761291336109, 0.9255172844843733, 0.1113677378764549, 0.6754443327730256, 0.5202249807001851, 0.044248282026928876, nan, 0.19305231360703007, 0.5827890301983566, 0.55261350291374, 0.014101531023368252, 0.9394324953961886, 0.9048990380903004, 0.9755035483352065, 0.0154197231547101, 0.45343331504399603, 0.47399118420979125, 0.0] | [nan, 0.863689319961114, 0.895499199129711, 0.4429491151299229, 0.765606502579043, 0.48571154804691785, nan, 0.6324972973597951, 0.6956526681114833, 0.21654760828284655, 0.8900625950293436, 0.47545424740738185, 0.0, 0.5803666368933691, 0.08725014977397745, 0.5992339680455242, 0.0, 0.06544361365913821, 0.7982999807741021, 0.09452243441114062, 0.4717078672807595, 0.4521680319629779, 0.04200588718873478, nan, 0.13927135130851676, 0.4339583670272156, 0.4507663389242337, 0.01348747591522158, 0.8884945203133995, 0.7465496843182982, 0.9537005332798949, 0.012399112712579277, 0.24028127759471044, 0.3662329926099869, 0.0] |
| 0.1 | 24.5 | 9800 | 0.6434 | 0.5135 | 0.4300 | 0.8895 | [nan, 0.9377224102212196, 0.9606645248290818, 0.5361588785046729, 0.8331230894215592, 0.6375564947567199, nan, 0.7494747310743753, 0.8814869288798216, 0.23789303616554125, 0.9491298161249899, 0.5208281880299662, 0.0, 0.7291537667698659, 0.1923319460209358, 0.8872670000649477, 0.0, 0.058754221977849345, 0.9251466166261608, 0.10029967383565953, 0.684280516653427, 0.5108906098741529, 0.04338231186099782, nan, 0.1931896196622271, 0.581302663945151, 0.5429748953047794, 0.014101531023368252, 0.939044218900316, 0.9053540699149504, 0.9762874046608516, 0.016517986655062374, 0.4174033205307972, 0.4717006430275368, 0.0] | [nan, 0.8641608155359141, 0.8958643122776131, 0.4417664033758718, 0.7644541831979321, 0.4846296892790795, nan, 0.6335999382179972, 0.6905137105945841, 0.21054850773630565, 0.8890883354259757, 0.44958072768618534, 0.0, 0.6023700925018117, 0.08546290069491146, 0.6030192343768966, 0.0, 0.057282891713891865, 0.7981027891830667, 0.08634672672073433, 0.470738722708764, 0.44815859378883993, 0.04122753457750405, nan, 0.1376066035521477, 0.4340720968586592, 0.4532255678035067, 0.01352918438345574, 0.888563607775072, 0.7458284701692807, 0.9538944088343424, 0.01350879014029907, 0.2349899322716456, 0.3667384437299315, 0.0] |
| 0.0547 | 24.75 | 9900 | 0.6482 | 0.5155 | 0.4313 | 0.8898 | [nan, 0.9397340904212859, 0.9603330836947732, 0.5307733644859813, 0.8309005858255233, 0.6429241895489165, nan, 0.7515697741559071, 0.8821369265075675, 0.23520029827250508, 0.948613379528076, 0.5628961883592657, 0.0, 0.7383384932920537, 0.19170134947660486, 0.8888176268104176, 0.0, 0.06747309716440185, 0.9241314709843229, 0.1176757893342605, 0.6804680836745651, 0.509839842170402, 0.04290742499580982, nan, 0.19313469724014828, 0.5775631967341812, 0.5366821032106535, 0.009669621273166801, 0.9403802717370998, 0.9035215326574961, 0.9734618635336802, 0.012358054623067678, 0.41701721229856326, 0.48626373626373626, 0.0] | [nan, 0.8640778611527823, 0.8958137823018933, 0.4460626314967881, 0.7641756445447411, 0.4858917928580605, nan, 0.6328187132466054, 0.6908867956078256, 0.20850548118768247, 0.8893168906380365, 0.47044860327507915, 0.0, 0.6030682345007797, 0.08536927829261444, 0.6011740028114567, 0.0, 0.06583048076431819, 0.7992350659678636, 0.09887388797306791, 0.4713607906006725, 0.44755617108819296, 0.040873892333484124, nan, 0.13801020408163264, 0.4335135793399971, 0.45185060816356987, 0.0093603744149766, 0.8886009280250379, 0.7464543006342957, 0.9536265277974683, 0.010431767147039596, 0.2352570275599578, 0.3719794479055262, 0.0] |
| 0.0627 | 25.0 | 10000 | 0.6463 | 0.5168 | 0.4317 | 0.8895 | [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0] | [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lfoppiano/MatTPUSciBERT | lfoppiano | 2023-04-23T08:29:58Z | 496 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"arxiv:1810.04805",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-09-21T00:21:49Z | # Material SciBERT (TPU): Improving language understanding in materials science
**Work in progress**
## Introduction
A SciBERT-based model pre-trained on materials-science full-text articles.
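For reference, here is a minimal, hypothetical sketch of querying the checkpoint for masked-token prediction with the Hugging Face `transformers` pipeline (the example sentence is purely illustrative):
```python
from transformers import pipeline

# Illustrative sketch: BERT-style checkpoints predict the [MASK] token
fill_mask = pipeline("fill-mask", model="lfoppiano/MatTPUSciBERT")
print(fill_mask("The critical temperature of MgB2 is 39 [MASK]."))
```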
## Authors
Luca Foppiano
Pedro Ortiz Suarez
## TLDR
- Collected full text from ~700,000 articles provided by the National Institute for Materials Science (NIMS) TDM platform (https://dice.nims.go.jp/services/TDM-PF/en/), a dataset we call ScienceCorpus (SciCorpus)
- Added 100 domain-specific out-of-vocabulary words, extracted from SciCorpus with a keyword model (KeyBERT), to the SciBERT vocabulary (32k tokens)
- Starting conditions: original SciBERT weights
- Pre-trained MatTPUSciBERT on Google Cloud TPUs (Tensor Processing Units) as follows:
  - 800,000 steps with batch_size: 256, max_seq_length: 512
  - 100,000 steps with batch_size: 2048, max_seq_length: 128
- Fine-tuned and tested on NER for superconductors (https://github.com/lfoppiano/grobid-superconductors) and physical quantities (https://github.com/kermitt2/grobid-quantities)
## Related work
### BERT Implementations
- BERT (the original) https://arxiv.org/abs/1810.04805
- RoBERTa (Re-implementation by Facebook) https://arxiv.org/abs/1907.11692
### Relevant models
- SciBERT: BERT, from scratch, scientific articles (biology + CS) https://github.com/allenai/scibert
- MatSciBERT (Gupta): RoBERTa, from scratch, SciBERT vocab and weights, ~150K papers limited to 4 materials-science families http://github.com/m3rg-iitd/matscibert
- MaterialBERT: Not yet published
- MatBERT (CEDER): BERT, from scratch, 2M documents on materials science (~60M paragraphs) https://github.com/lbnlp/MatBERT
- BatteryBERT (Cole): BERT, mixed from scratch and with predefined weights https://github.com/ShuHuang/batterybert/
### Results
Results obtained via 10-fold cross-validation, using DeLFT (https://github.com/kermitt2/delft)
#### NER Superconductors
| Model | Precision | Recall | F1 |
|----------------------|-----------|---------|--------|
| SciBERT (baseline) | 81.62% | 84.23% | 82.90% |
| MatSciBERT (Gupta) | 81.45% | 84.36% | 82.88% |
| MatTPUSciBERT | 82.13% | 85.15% | 83.61% |
| MatBERT (Ceder) | 81.25% | 83.99% | 82.60% |
| BatteryScibert-cased | 81.09% | 84.14% | 82.59% |
#### NER Quantities
| Model | Precision | Recall | F1 |
|----------------------|-----------|---------|----------|
| SciBERT (baseline) | 88.73% | 86.76% | 87.73% |
| MatSciBERT (Gupta) | 84.98% | 90.12% | 87.47% |
| MatTPUSciBERT | 88.62% | 86.33% | 87.46% |
| MatBERT (Ceder) | 85.08% | 89.93% | 87.44% |
| BatteryScibert-cased | 85.02% | 89.30% | 87.11% |
## References
TBA
## Acknowledgements
This work was supported by Google, through the researchers program https://cloud.google.com/edu/researchers |
EleutherAI/pythia-1b-deduped-v0 | EleutherAI | 2023-07-10T01:32:03Z | 496 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-10-18T03:08:13Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the model weights saved at training step 3000 (each checkpoint
# lives on its own branch, selected via `revision`)
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is identical across checkpoints; pinning the revision
# simply keeps it alongside the model files
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
johnowhitaker/sd-class-wikiart-from-bedrooms | johnowhitaker | 2023-02-28T06:18:58Z | 496 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-12-06T10:09:53Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model initialized from https://huggingface.co/google/ddpm-bedroom-256 and trained for 5000 steps on https://huggingface.co/datasets/huggan/wikiart.
Script: https://github.com/huggingface/diffusion-models-class/blob/main/unit2/finetune_model.py
Training Logs (with example images): https://wandb.ai/johnowhitaker/dm_finetune/runs/2upaa341
## Usage
```python
from diffusers import DDPMPipeline

# Load the fine-tuned unconditional DDPM and draw one sample
pipeline = DDPMPipeline.from_pretrained('johnowhitaker/sd-class-wikiart-from-bedrooms')
image = pipeline().images[0]
image  # a PIL image; display it in a notebook cell
```
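As a small follow-up (the filename is illustrative), the sampled PIL image can be saved to disk:
```python
# The pipeline returns PIL images, so the sample saves directly
image.save("wikiart_sample.png")
```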
|
numind/NuNER-multilingual-v0.1 | numind | 2024-03-15T13:03:34Z | 496 | 54 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"token-classification",
"entity-recognition",
"foundation-model",
"mBERT",
"Multilingual Bert",
"BERT",
"generic",
"en",
"fr",
"de",
"it",
"es",
"pt",
"pl",
"nl",
"ru",
"arxiv:2402.15343",
"license:mit",
"region:us"
]
| token-classification | 2023-09-06T12:58:49Z | ---
license: mit
language:
- en
- fr
- de
- it
- es
- pt
- pl
- nl
- ru
pipeline_tag: token-classification
inference: false
tags:
- token-classification
- entity-recognition
- foundation-model
- feature-extraction
- mBERT
- Multilingual Bert
- BERT
- generic
---
# SOTA Entity Recognition Multilingual Foundation Model by NuMind 🔥
This model provides the best embedding for the Entity Recognition task and supports 9+ languages.
**Check out other models by NuMind:**
* SOTA Entity Recognition Foundation Model in English: [link](https://huggingface.co/numind/entity-recognition-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
## About
[Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on an artificially annotated multilingual subset of the [Oscar dataset](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201). This model provides domain- and language-independent embeddings for the Entity Recognition task. We fine-tuned it on only 9 languages, but the model can generalize to other languages supported by Multilingual BERT.
**Metrics:**
Read more about evaluation protocol & datasets in our [blog post](https://www.numind.ai/blog/a-foundation-model-for-entity-recognition)
| Model | F1 macro |
|----------|----------|
| bert-base-multilingual-cased | 0.5206 |
| ours | 0.5892 |
| ours + two emb | 0.6231 |
## Usage
Embeddings can be used out of the box or fine-tuned on specific datasets.
Get embeddings:
```python
import torch
import transformers
model = transformers.AutoModel.from_pretrained(
'numind/NuNER-multilingual-v0.1',
output_hidden_states=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
'numind/NuNER-multilingual-v0.1',
)
text = [
"NuMind is an AI company based in Paris and USA.",
"NuMind est une entreprise d'IA basée à Paris et aux États-Unis.",
"See other models from us on https://huggingface.co/numind"
]
encoded_input = tokenizer(
text,
return_tensors='pt',
padding=True,
truncation=True
)
output = model(**encoded_input)
# two emb trick: for better quality
emb = torch.cat(
(output.hidden_states[-1], output.hidden_states[-7]),
dim=2
)
# single emb: for better speed
# emb = output.hidden_states[-1]
```
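As a quick follow-up sketch of what the snippet returns (shapes are the usual BERT-base ones; the two-embedding trick above doubles the hidden size from 768 to 1536):
```python
# emb has shape (batch_size, sequence_length, embedding_dim)
token_vectors = emb[0]   # per-token vectors for the first sentence
print(emb.shape, token_vectors.shape)
```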
## Citation
```
@misc{bogdanov2024nuner,
title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
year={2024},
eprint={2402.15343},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
TheBloke/Xwin-LM-7B-V0.2-GGUF | TheBloke | 2023-10-18T13:12:09Z | 496 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Xwin-LM/Xwin-LM-7B-V0.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-18T13:06:45Z | ---
base_model: Xwin-LM/Xwin-LM-7B-V0.2
inference: false
license: llama2
model_creator: Xwin-LM
model_name: Xwin LM 7B v0.2
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Xwin LM 7B v0.2 - GGUF
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [Xwin LM 7B v0.2](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Xwin-LM's Xwin LM 7B v0.2](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
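As a small illustrative sketch (not part of the original card), filling the template in Python might look like this:
```python
def format_vicuna(user_message: str) -> str:
    # Wrap a user message in the Vicuna-style template shown above
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    return f"{system} USER: {user_message} ASSISTANT:"

print(format_vicuna("Why is the sky blue?"))
```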
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [xwin-lm-7b-v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [xwin-lm-7b-v0.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [xwin-lm-7b-v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [xwin-lm-7b-v0.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [xwin-lm-7b-v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [xwin-lm-7b-v0.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [xwin-lm-7b-v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [xwin-lm-7b-v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [xwin-lm-7b-v0.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [xwin-lm-7b-v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [xwin-lm-7b-v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [xwin-lm-7b-v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.2-GGUF/blob/main/xwin-lm-7b-v0.2.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Xwin-LM-7B-V0.2-GGUF and below it, a specific filename to download, such as: xwin-lm-7b-v0.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Xwin-LM-7B-V0.2-GGUF xwin-lm-7b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
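Alternatively, the same download can be done programmatically; this is a minimal sketch using the `huggingface_hub` library installed above:

```python
# Download a single GGUF file to the current directory via the Python API.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Xwin-LM-7B-V0.2-GGUF",
    filename="xwin-lm-7b-v0.2.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)  # path to the downloaded file
```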
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Xwin-LM-7B-V0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Xwin-LM-7B-V0.2-GGUF xwin-lm-7b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m xwin-lm-7b-v0.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Xwin-LM-7B-V0.2-GGUF", model_file="xwin-lm-7b-v0.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
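### How to load this model in Python code, using llama-cpp-python
A minimal sketch for comparison; install with `pip install llama-cpp-python` (adding the appropriate extras for your GPU backend), and note that the file path and generation parameters below are illustrative assumptions:

```python
# Load a local GGUF file with llama-cpp-python and run a single completion.
from llama_cpp import Llama

llm = Llama(
    model_path="./xwin-lm-7b-v0.2.Q4_K_M.gguf",
    n_ctx=4096,       # context length
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a haiku about autumn. ASSISTANT:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```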
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
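As a quick illustration (an assumption based on the guides above, not taken from them verbatim; older LangChain versions import from `langchain.llms` instead of `langchain_community.llms`):

```python
# Minimal LangChain + llama-cpp-python sketch using a local GGUF file.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./xwin-lm-7b-v0.2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)
print(llm.invoke(
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Summarize what GGUF is in one sentence. ASSISTANT:"
))
```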
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Xwin-LM's Xwin LM 7B v0.2
<h3 align="center">
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>
<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>
**Step up your LLM alignment with Xwin-LM!**
Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.
## News
- 💥 [Oct 12, 2023] [Xwin-LM-7B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2) and [Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) have been released, with improved comparison data and RL training (i.e., PPO). Their winrates vs. GPT-4 have increased significantly, reaching **59.83%** (7B model) and **70.36%** (13B model) respectively. The 70B model will be released soon.
- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate of **95.57%** against Davinci-003 on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note its winrate vs. GPT-4 is **60.61%**.
- 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
- 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models.
- 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models.
## Model Card
| Model | Checkpoint | Report | License |
|------------|------------|-------------|------------------|
|Xwin-LM-7B-V0.2| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-13B-V0.2| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
## Benchmarks
### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/).
The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well. Our Xwin-LM model family establishes new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** against Text-Davinci-003 and **60.61%** against GPT-4.
| **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** |
|----------------------------------|------------|----------|-------------|
| | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**|
| **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** |
| GPT-4 | 95.28 | 84.66 | 50.00 |
| **Xwin-LM-13B-V0.2** | **93.22** | **87.14** | **70.36** |
| WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 |
| Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 |
| **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** |
| ChatGPT | 89.37 | 50.00 | 16.60 |
| **Xwin-LM-7B-V0.2** | **89.31** | **79.60** | **59.83** |
| WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 |
| **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** |
| Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 |
### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)
| Prompts | GPT4 response | Xwin-LM response |
|----------------|------------|--------|
|I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.|
| Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." |
### Xwin-LM performance on NLP foundation tasks.
The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average |
|------------------|-------------|-------------|-------------------|-------------------|------------|
| Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 |
|Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 |
|Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 |
| WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 |
| WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 |
| WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2|
| Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 |
| Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 |
| Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 |
| **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 |
| **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 |
| **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** |
| **Xwin-LM-7B-V0.2** | 50.0 | 56.4 | 49.5 | 78.9 | 58.7 |
| **Xwin-LM-13B-V0.2** | 56.6 | 61.5 | 43.8 | 82.9 | 61.2 |
## Inference
### Conversation Template
To obtain the desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations.
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>......
```
### HuggingFace Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Hello, can you help me? "
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt")
samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7)
output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(output)
# Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you.
```
### vLLM Example
Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vLLM](https://github.com/vllm-project/vllm). Please refer to [vLLM](https://github.com/vllm-project/vllm) for detailed installation instructions.
```python
from vllm import LLM, SamplingParams
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Hello, can you help me? "
    "ASSISTANT:"
)
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)
llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1")
outputs = llm.generate([prompt,], sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(generated_text)
```
## TODO
- [ ] Release the source code
- [ ] Release more capabilities, such as math, reasoning, etc.
## Citation
Please consider citing our work if you use the data or code in this repo.
```
@software{xwin-lm,
title = {Xwin-LM},
author = {Xwin-LM Team},
url = {https://github.com/Xwin-LM/Xwin-LM},
version = {pre-release},
year = {2023},
month = {9},
}
```
## Acknowledgements
Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vLLM](https://github.com/vllm-project/vllm).
<!-- original-model-card end -->
|
maddes8cht/KnutJaegersberg-StableLM-3b-EssayWriter-gguf | maddes8cht | 2023-11-16T18:33:19Z | 496 | 2 | null | [
"gguf",
"dataset:ChristophSchuhmann/essays-with-instructions",
"license:cc-by-sa-4.0",
"region:us"
]
| null | 2023-11-16T15:54:23Z | ---
license: cc-by-sa-4.0
datasets:
- ChristophSchuhmann/essays-with-instructions
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# StableLM-3b-EssayWriter - GGUF
- Model creator: [KnutJaegersberg](https://huggingface.co/KnutJaegersberg)
- Original model: [StableLM-3b-EssayWriter](https://huggingface.co/KnutJaegersberg/StableLM-3b-EssayWriter)
# StableLM
This is a model based on StableLM.
StableLM is a family of language models by Stability AI.
## Note:
Current (as of 2023-11-15) implementations of llama.cpp only support GPU offloading of up to 34 layers with these StableLM models.
The model will crash immediately if `-ngl` is larger than 34.
The model works fine, however, without any GPU acceleration.
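A safe example invocation looks like this (replace `[model-file].gguf` with the quant you downloaded; the other flags are illustrative):

```shell
# Keep -ngl at 34 or below for these StableLM models; -e processes the \n escapes.
./main -m [model-file].gguf -ngl 34 -c 2048 -e -p "### Instruction:\nWrite a short essay about autumn.\n### Response:\n"
```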
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
A number of quantized files are available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may see bigger differences between the two answers than between the quantized and the original model.
---
# Original Model Card:

Prompt Example:
```
### Instruction:
Provide the inputted essay that when summarized resulted in the following summary : The organization's learning and development practices are poor as they involve manual orientation from experienced workers. New employees are not advised of their roles in the organization before assignment of tasks. The orientation process takes long due to lack of a well-coordinated orientation programs.
### Response:
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and to maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
livingbox/bedroom-model-v3 | livingbox | 2023-12-05T09:57:44Z | 496 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-12-05T09:54:09Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bedroom-model.v3 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
OpenGVLab/InternVL-Chat-V1-2 | OpenGVLab | 2024-05-29T11:00:13Z | 496 | 12 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"internvl_chat",
"feature-extraction",
"visual-question-answering",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:laion/laion-coco",
"dataset:laion/laion2B-multi",
"dataset:kakaobrain/coyo-700m",
"dataset:conceptual_captions",
"dataset:wanng/wukong100m",
"arxiv:2312.14238",
"arxiv:2404.16821",
"license:mit",
"region:us"
]
| visual-question-answering | 2024-02-11T16:48:02Z | ---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: visual-question-answering
---
# Model Card for InternVL-Chat-V1-2
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/k0tma4PhPFrwJvpS_gVQf.webp" alt="Image Description" width="300" height="300">
</p>
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)
[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)
We are excited to introduce InternVL-Chat-V1-2. Inspired by [LLaVA-NeXT-34B](https://llava-vl.github.io/blog/2024-01-30-llava-next/), we have also adopted [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) as the language model. Below is the pipeline.
<p align="center">
<img width="600" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/GIEKCvNc1Y5iMQqLv645p.png">
</p>
From the experimental results, **we've observed that a stronger language model (34B) can better leverage the powerful capabilities of our vision foundation model ([InternViT-6B](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)).**
For better training reproducibility, we follow a minimalist design and data-efficient approach similar to LLaVA-NeXT. To reduce training costs, we provide a pre-trained MLP projector and only employ around 1 million visual instruction tuning samples for SFT. Our model has a total of 40 billion parameters and can be trained within 1.5 days using 32 A100 GPUs. The code, data, and model will be made publicly available.
## Model Details
- **Model Type:** multimodal large language model (MLLM)
- **Model Stats:**
- Architecture: [InternViT-6B-448px-V1-2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) + MLP + [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
- Image size: 448 x 448 (256 tokens)
- Params: 40B
- **Training Strategy:**
- Pretraining Stage
- Learnable Component: ViT + MLP
- Data: Trained on 8192x4800=39.3M samples, including COYO, LAION, CC12M, CC3M, SBU, Wukong, GRIT, Objects365, OpenImages, and OCR-related datasets.
- Note: In this stage, we load the pretrained weights of [InternViT-6B-448px-V1-2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2). Moreover, in order to reduce the number of visual tokens, we use a pixel shuffle operation to reduce 1024 tokens to 256 tokens (see the sketch after this list).
- Supervised Finetuning Stage
- Learnable Component: ViT + MLP + LLM
- Data: A simplified, fully open-source dataset, containing approximately 1.2 million samples.
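The pixel-shuffle reduction mentioned above can be sketched as follows; this is a minimal illustration of the idea (the exact implementation in the released code may differ), trading spatial resolution for channel depth so that 32x32 = 1024 tokens become 16x16 = 256:

```python
# Minimal pixel-shuffle sketch: halve each spatial dimension, quadruple channels.
import torch

def pixel_shuffle(x: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    # x: (batch, height, width, channels)
    n, h, w, c = x.shape
    x = x.reshape(n, h, int(w * scale), int(c / scale))
    x = x.permute(0, 2, 1, 3)
    x = x.reshape(n, int(w * scale), int(h * scale), int(c / (scale * scale)))
    return x.permute(0, 2, 1, 3)

tokens = torch.randn(1, 32, 32, 1024)  # 32x32 grid = 1024 visual tokens
print(pixel_shuffle(tokens).shape)     # torch.Size([1, 16, 16, 4096]) -> 256 tokens
```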
## Released Models
| Model | Vision Foundation Model | Release Date |Note |
| :---------------------------------------------------------:|:--------------------------------------------------------------------------: |:----------------------:| :---------------------------------- |
| InternVL-Chat-V1-5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) |2024.04.18 | support 4K image; super strong OCR; Approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new)|
| InternVL-Chat-V1-2-Plus(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.21 | more SFT data and stronger |
| InternVL-Chat-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) ) |InternViT-6B-448px-V1-2(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) |2024.02.11 | scaling up LLM to 34B |
| InternVL-Chat-V1-1(🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) |InternViT-6B-448px-V1-0(🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) |2024.01.24 | support Chinese and stronger OCR |
## Performance
\* Proprietary Model
| name | image size | MMMU<br>(val) | MMMU<br>(test) | MathVista<br>(testmini) | MMB<br>(test) | MMB−CN<br>(test) | MMVP | MME | ScienceQA<br>(image) | POPE | TextVQA<br>(val) | SEEDv1<br>(image) | VizWiz<br>(test) | GQA<br>(test) |
| ------------------ | ---------- | ------------- | -------------- | ----------------------- | ------------- | ---------------- | ---- | -------- | -------------------- | ---- | ---------------- | ----------------- | ---------------- | ------------- |
| GPT−4V\* | unknown | 56.8 | 55.7 | 49.9 | 77.0 | 74.4 | 38.7 | 1409/517 | - | - | 78.0 | 71.6 | - | - |
| Gemini Ultra\* | unknown | 59.4 | - | 53.0 | - | - | - | - | - | - | 82.3 | - | - | - |
| Gemini Pro\* | unknown | 47.9 | - | 45.2 | 73.6 | 74.3 | 40.7 | 1497/437 | - | - | 74.6 | 70.7 | - | - |
| Qwen−VL−Plus\* | unknown | 45.2 | 40.8 | 43.3 | 67.0 | 70.7 | - | 1681/502 | - | - | 78.9 | 65.7 | - | - |
| Qwen−VL−Max\* | unknown | 51.4 | 46.8 | 51.0 | 77.6 | 75.7 | - | - | - | - | 79.5 | - | - | - |
| | | | | | | | | | | | | | | |
| LLaVA−NEXT−34B | 672x672 | 51.1 | 44.7 | 46.5 | 79.3 | 79.0 | - | 1631/397 | 81.8 | 87.7 | 69.5 | 75.9 | 63.8 | 67.1 |
| InternVL−Chat−V1-2 | 448x448 | 51.6 | 46.2 | 47.7 | 82.2 | 81.2 | 56.7 | 1687/489 | 83.3 | 88.0 | 72.5 | 75.6 | 60.0 | 64.0 |
- In most benchmarks, InternVL-Chat-V1-2 achieves better performance than LLaVA-NeXT-34B.
- Update (2024-04-21): We have fixed a bug in the evaluation code, and the TextVQA result has been corrected to 72.5.
## Training Details
### Data Preparation
Inspired by LLaVA-NeXT, we adopted a data-efficient SFT strategy to train InternVL-Chat-V1-2, utilizing approximately 1.2M visual instruction tuning samples in total, all of which are fully open-source. In a macro sense, we build upon [ShareGPT-4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md#prepare-images) and additionally integrate [LLaVA-ZH](https://huggingface.co/datasets/openbmb/llava_zh), [DVQA](https://github.com/kushalkafle/DVQA_dataset), [ChartQA](https://github.com/vis-nlp/ChartQA), [AI2D](https://allenai.org/data/diagrams), [DocVQA](https://www.docvqa.org/datasets), [GeoQA+](https://github.com/SCNU203/GeoQA-Plus), and [SynthDoG-EN](https://huggingface.co/datasets/naver-clova-ix/synthdog-en). Most of the data remains consistent with LLaVA-NeXT.
For more details about data preparation, please see [here](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets).
### Training (Supervised Finetuning)
We provide [slurm scripts](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat/shell/hermes2_yi34b/internvl_chat_v1_2_hermes2_yi34b_448_finetune.sh) for multi-node multi-GPU training. You can use either 32 or 64 GPUs to train this model. If you use 64 GPUs, training will take approximately 18 hours.
For more details about training, please see [here](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#start-training).
The hyperparameters used for finetuning are listed in the following table.
| Hyperparameter | Trainable Param | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| ------------------ | ---------------- | ----------------- | ------------- | ------ | ---------- | ------------ |
| InternVL−Chat−V1-2 | 40B (full model) | 512 | 1e-5 | 1 | 2048 | 0.05 |
## Model Usage
We provide example code to run InternVL-Chat-V1-2 using `transformers`.
You can also use our [online demo](https://internvl.opengvlab.com/) for a quick experience of this model.
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer
path = "OpenGVLab/InternVL-Chat-V1-2"
# If you have an 80G A100 GPU, you can put the entire model on a single GPU.
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True).eval().cuda()
# Otherwise, you need to set device_map='auto' to use multiple GPUs for inference.
# model = AutoModel.from_pretrained(
# path,
# torch_dtype=torch.bfloat16,
# low_cpu_mem_usage=True,
# trust_remote_code=True,
# device_map='auto').eval()
tokenizer = AutoTokenizer.from_pretrained(path)
image = Image.open('./examples/image2.jpg').convert('RGB')
image = image.resize((448, 448))
image_processor = CLIPImageProcessor.from_pretrained(path)
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
generation_config = dict(
num_beams=1,
max_new_tokens=512,
do_sample=False,
)
# single-round conversation
question = "请详细描述图片"
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(question, response)
# multi-round conversation
question = "请详细描述图片"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(question, response)
question = "请根据图片写一首诗"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(question, response)
```
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2023internvl,
title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2312.14238},
year={2023}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
```
## License
This project is released under the MIT license. Parts of this project contain code and models (e.g., LLaMA2) from other sources, which are subject to their respective licenses.
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Acknowledgement
InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
## Contributors
Developed by: Zhe Chen, Weiyun Wang, Wenhai Wang, Erfei Cui, Zhangwei Gao, Xizhou Zhu, Lewei Lu, Tong Lu, Yu Qiao, Jifeng Dai |
pkarypis/gpt2-sft-dutch | pkarypis | 2024-03-30T21:23:39Z | 496 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:JackFram/llama-68m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-19T02:05:00Z | ---
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: gpt2-sft-dutch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-sft-dutch
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `transformers.TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
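For illustration, here is a hedged sketch of how the listed values map onto `transformers.TrainingArguments`; this is an assumption for readability, not the authors' actual training script:

```python
# Approximate mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-sft-dutch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,  # x 4 GPUs = total train batch size 32
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
)
```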
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.075 | 1.0 | 9129 | 2.0464 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
ChristianAzinn/gist-embedding-v0-gguf | ChristianAzinn | 2024-04-07T22:16:33Z | 496 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"bert",
"gguf - feature-extraction - mteb - sentence-similarity - sentence-transformers",
"feature-extraction",
"en",
"arxiv:2402.16829",
"arxiv:2212.09741",
"base_model:avsolatorio/GIST-Embedding-v0",
"license:mit",
"autotrain_compatible",
"region:us"
]
| feature-extraction | 2024-04-07T20:14:18Z | ---
base_model: avsolatorio/GIST-Embedding-v0
inference: false
language:
- en
license: mit
model_creator: avsolatorio
model_name: GIST-Embedding-v0
model_type: bert
quantized_by: ChristianAzinn
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- bert
- gguf
- feature-extraction
- mteb
- sentence-similarity
- sentence-transformers
---
# gist-embedding-v0-gguf
Model creator: [avsolatorio](https://huggingface.co/avsolatorio)
Original model: [GIST-Embedding-v0](https://huggingface.co/avsolatorio/GIST-Embedding-v0)
## Original Description
*GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning*
The model is fine-tuned on top of the [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task).
The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.
Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829)
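Since no instruction prefix is needed, encoding with the original (non-GGUF) checkpoint is a one-liner; this is a minimal sentence-transformers sketch (assumes `pip install sentence-transformers`):

```python
# Encode plain queries directly - no instruction prefix required.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avsolatorio/GIST-Embedding-v0")
embeddings = model.encode(["What is TSNE?", "Who is Laurens van der Maaten?"])
print(embeddings.shape)  # (2, 768)
```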
# Data
The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available:
- Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets)
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb
The dataset contains a `task_type` key, which can be used to select only the mteb classification tasks (prefixed with `mteb_`).
The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741).
The MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, which resulted in significant improvements in certain tasks while adversely degrading performance in others.
The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.
## Description
This repo contains GGUF format files for the GIST-Embedding-v0 embedding model.
These files were converted and quantized with llama.cpp [PR 5500](https://github.com/ggerganov/llama.cpp/pull/5500), commit [34aa045de](https://github.com/ggerganov/llama.cpp/pull/5500/commits/34aa045de44271ff7ad42858c75739303b8dc6eb), on a consumer RTX 4090.
This model supports up to 512 tokens of context.
## Compatibility
These files are compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp) as of commit [4524290e8](https://github.com/ggerganov/llama.cpp/commit/4524290e87b8e107cc2b56e1251751546f4b9051), as well as [LM Studio](https://lmstudio.ai/) as of version 0.2.19.
# Meta-information
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
## Provided Files
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [gist-embedding-v0.Q2_K.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q2_K.gguf) | Q2_K | 2 | 54.2 MB | smallest, significant quality loss - not recommended for most purposes |
| [gist-embedding-v0.Q3_K_S.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q3_K_S.gguf) | Q3_K_S | 3 | 58.5 MB | very small, high quality loss |
| [gist-embedding-v0.Q3_K_M.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q3_K_M.gguf) | Q3_K_M | 3 | 64.6 MB | very small, high quality loss |
| [gist-embedding-v0.Q3_K_L.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q3_K_L.gguf) | Q3_K_L | 3 | 69.5 MB | small, substantial quality loss |
| [gist-embedding-v0.Q4_0.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q4_0.gguf) | Q4_0 | 4 | 69.8 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gist-embedding-v0.Q4_K_S.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q4_K_S.gguf) | Q4_K_S | 4 | 71 MB | small, greater quality loss |
| [gist-embedding-v0.Q4_K_M.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q4_K_M.gguf) | Q4_K_M | 4 | 74.4 MB | medium, balanced quality - recommended |
| [gist-embedding-v0.Q5_0.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q5_0.gguf) | Q5_0 | 5 | 80.4 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gist-embedding-v0.Q5_K_S.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q5_K_S.gguf) | Q5_K_S | 5 | 80.4 MB | large, low quality loss - recommended |
| [gist-embedding-v0.Q5_K_M.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q5_K_M.gguf) | Q5_K_M | 5 | 82.8 MB | large, very low quality loss - recommended |
| [gist-embedding-v0.Q6_K.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q6_K.gguf) | Q6_K | 6 | 91.7 MB | very large, extremely low quality loss |
| [gist-embedding-v0.Q8_0.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0.Q8_0.gguf) | Q8_0 | 8 | 118 MB | very large, extremely low quality loss - recommended |
| [gist-embedding-v0_fp16.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0_fp16.gguf) | FP16 | 16 | 219 MB | enormous, pretty much the original model - not recommended |
| [gist-embedding-v0_fp32.gguf](https://huggingface.co/ChristianAzinn/gist-embedding-v0-gguf/blob/main/gist-embedding-v0_fp32.gguf) | FP32 | 32 | 436 MB | enormous, pretty much the original model - not recommended |
# Examples
## Example Usage with `llama.cpp`
To compute a single embedding, build llama.cpp and run:
```shell
./embedding -ngl 99 -m [filepath-to-gguf].gguf -p 'What is TSNE?'
```
You can also submit a batch of texts to embed, as long as the total number of tokens does not exceed the context length. Only the first three embeddings are shown by the `embedding` example.
`texts.txt`:
```
search_query: What is TSNE?
search_query: Who is Laurens Van der Maaten?
```
Compute multiple embeddings:
```shell
./embedding -ngl 99 -m [filepath-to-gguf].gguf -f texts.txt
```
## Example Usage with LM Studio
Download the 0.2.19 beta build from here: [Windows](https://releases.lmstudio.ai/windows/0.2.19/beta/LM-Studio-0.2.19-Setup-Preview-1.exe) [MacOS](https://releases.lmstudio.ai/mac/arm64/0.2.19/beta/LM-Studio-darwin-arm64-0.2.19-Preview-1.zip) [Linux](https://releases.lmstudio.ai/linux/0.2.19/beta/LM_Studio-0.2.19-Preview-1.AppImage)
Once installed, open the app. The home should look like this:

Search for either "ChristianAzinn" in the main search bar or go to the "Search" tab on the left menu and search the name there.

Select your model from those that appear (this example uses `bge-small-en-v1.5-gguf`) and select which quantization you want to download. Since this model is pretty small, I recommend Q8_0, if not f16/32. Generally, the lower you go in the list (or the bigger the number gets), the larger the file and the better the performance.

You will see a green checkmark and the word "Downloaded" once the model has successfully downloaded, which can take some time depending on your network speeds.

Once this model is finished downloading, navigate to the "Local Server" tab on the left menu and open the loader for text embedding models. This loader does not appear before version 0.2.19, so ensure you downloaded the correct version.

Select the model you just downloaded from the dropdown that appears to load it. You may need to play with configurations in the right-side menu, such as GPU offload, if it doesn't fit entirely into VRAM.

All that's left to do is to hit the "Start Server" button:

And if you see text like that shown below in the console, you're good to go! You can use this as a drop-in replacement for the OpenAI embeddings API in any application that requires it, or you can query the endpoint directly to test it out.

Example curl request to the API endpoint:
```shell
curl http://localhost:1234/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"input": "Your text string goes here",
"model": "model-identifier-here"
}'
```
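The same request can be issued from Python; here is a minimal standard-library sketch (the model identifier is whatever LM Studio shows for your loaded model):

```python
# Query the OpenAI-compatible embeddings endpoint exposed by LM Studio.
import json
import urllib.request

payload = {"input": "Your text string goes here", "model": "model-identifier-here"}
req = urllib.request.Request(
    "http://localhost:1234/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.load(resp)["data"][0]["embedding"]
print(len(embedding))  # embedding dimensionality
```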
For more information, see the LM Studio [text embedding documentation](https://lmstudio.ai/docs/text-embeddings).
## Acknowledgements
Thanks to the LM Studio team and everyone else working on open-source AI.
This README is inspired by that of [nomic-ai-embed-text-v1.5-gguf](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-gguf), another excellent embedding model, and those of the legendary [TheBloke](https://huggingface.co/TheBloke). |
TechxGenus/Meta-Llama-3-8B-GPTQ | TechxGenus | 2024-04-19T08:00:20Z | 496 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-04-19T07:25:24Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
GPTQ quantized version of Meta-Llama-3-8B model.
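A minimal loading sketch (the repo id below is a placeholder for this repository; it assumes `optimum` and `auto-gptq` are installed so that `transformers` can dispatch the GPTQ checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute this repository's actual id.
model_id = "<this-repo>/Meta-Llama-3-8B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ weights load through the standard API when auto-gptq is available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hey how are you doing today?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```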
---
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B"
pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Hey how are you doing today?")
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
   <td><strong>Carbon Emitted (tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two-fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
```bibtex
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
mradermacher/L3-Arcania-4x8b-i1-GGUF | mradermacher | 2024-05-07T16:00:03Z | 496 | 8 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Steelskull/L3-Arcania-4x8b",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-04T06:12:16Z | ---
base_model: Steelskull/L3-Arcania-4x8b
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Steelskull/L3-Arcania-4x8b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-Arcania-4x8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
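For example, a single quant from the table below can be fetched programmatically; a minimal sketch using `huggingface_hub` (the filename is taken from the table and must match exactly):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/L3-Arcania-4x8b-i1-GGUF",
    filename="L3-Arcania-4x8b.i1-Q4_K_M.gguf",
)
print(path)
```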
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF/resolve/main/L3-Arcania-4x8b.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/vit_medium_patch16_rope_reg1_gap_256.sbb_in1k | timm | 2024-05-27T16:04:53Z | 496 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2303.11331",
"arxiv:2309.16588",
"arxiv:2010.11929",
"license:mit",
"region:us"
]
| image-classification | 2024-05-10T23:56:15Z | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for vit_medium_patch16_rope_reg1_gap_256.sbb_in1k
A Vision Transformer (ViT) image classification model. This is a `timm` specific variation of the architecture with rotary position embeddings (ROPE), registers, and global average pooling.
There are a number of models in the lower end of model scales that originate in `timm`:
| variant | width | mlp width (mult) | heads | depth | timm orig |
| ------- | ----- | ---------------- | ----- | ----- | ---- |
| tiny | 192 | 768 (4) | 3 | 12 | n |
| wee | 256 | 1280 (5) | 4 | 14 | y |
| pwee | 256 | 1280 (5) | 4 | 16 (parallel) | y |
| small | 384 | 1536 (4) | 6 | 12 | n |
| little | 320 | 1792 (5.6) | 5 | 14 | y |
| medium | 512 | 2048 (4) | 8 | 12 | y |
| mediumd | 512 | 2048 (4) | 8 | 20 | y |
| betwixt | 640 | 2560 (4) | 10 | 12 | y |
| base | 768 | 3072 (4) | 12 | 12 | n |
Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* Searching for better baselines. Influenced by Swin/DeiT/DeiT-III but with increased weight decay, moderate (in12k) to high (in1k) augmentation. Layer decay used for fine-tuning. Some runs used BCE and/or NAdamW instead of AdamW.
* See [train_hparams.yaml](./train_hparams.yaml) for specifics of each model.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 38.7
- GMACs: 9.8
- Activations (M): 14.3
- Image size: 256 x 256
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_medium_patch16_rope_reg1_gap_256.sbb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_medium_patch16_rope_reg1_gap_256.sbb_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 512, 16, 16])
# torch.Size([1, 512, 16, 16])
# torch.Size([1, 512, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_medium_patch16_rope_reg1_gap_256.sbb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
| model | top1 | top5 | param_count | img_size |
| -------------------------------------------------- | ------ | ------ | ----------- | -------- |
| [vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 86.202 | 97.874 | 64.11 | 256 |
| [vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 85.418 | 97.480 | 60.4 | 256 |
| [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 84.930 | 97.386 | 38.88 | 256 |
| [vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k) | 84.322 | 96.812 | 63.95 | 256 |
| [vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k) | 83.906 | 96.684 | 60.23 | 256 |
| [vit_base_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_base_patch16_rope_reg1_gap_256.sbb_in1k) | 83.866 | 96.67 | 86.43 | 256 |
| [vit_medium_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_rope_reg1_gap_256.sbb_in1k) | 83.81 | 96.824 | 38.74 | 256 |
| [vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k) | 83.774 | 96.972 | 22.52 | 256 |
| [vit_betwixt_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in1k) | 83.706 | 96.616 | 60.4 | 256 |
| [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 83.628 | 96.544 | 60.4 | 256 |
| [vit_medium_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in1k) | 83.47 | 96.622 | 38.88 | 256 |
| [vit_medium_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg1_gap_256.sbb_in1k) | 83.462 | 96.548 | 38.88 | 256 |
| [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 82.514 | 96.262 | 22.52 | 256 |
| [vit_wee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_wee_patch16_reg1_gap_256.sbb_in1k) | 80.258 | 95.360 | 13.42 | 256 |
| [vit_pwee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_pwee_patch16_reg1_gap_256.sbb_in1k) | 80.072 | 95.136 | 15.25 | 256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{darcet2023vision,
title={Vision Transformers Need Registers},
  author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv preprint arXiv:2309.16588},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
|
bartowski/internlm2-math-plus-20b-GGUF | bartowski | 2024-05-27T19:41:13Z | 496 | 0 | null | [
"gguf",
"math",
"text-generation",
"en",
"zh",
"license:other",
"region:us"
]
| text-generation | 2024-05-27T18:47:59Z | ---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of internlm2-math-plus-20b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3001">b3001</a> for quantization.
Original model: https://huggingface.co/internlm/internlm2-math-plus-20b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<s><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
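The same template can be rendered programmatically; a sketch assuming the original repo's tokenizer (the chat template ships with the tokenizer and requires `trust_remote_code=True`):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "internlm/internlm2-math-plus-20b", trust_remote_code=True
)
messages = [
    {"role": "system", "content": "You are a helpful math assistant."},
    {"role": "user", "content": "Solve x^2 - 5x + 6 = 0."},
]
# apply_chat_template renders the <|im_start|>/<|im_end|> format shown above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```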
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [internlm2-math-plus-20b-Q8_0.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q8_0.gguf) | Q8_0 | 21.10GB | Extremely high quality, generally unneeded but max available quant. |
| [internlm2-math-plus-20b-Q6_K.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q6_K.gguf) | Q6_K | 16.29GB | Very high quality, near perfect, *recommended*. |
| [internlm2-math-plus-20b-Q5_K_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q5_K_M.gguf) | Q5_K_M | 14.07GB | High quality, *recommended*. |
| [internlm2-math-plus-20b-Q5_K_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q5_K_S.gguf) | Q5_K_S | 13.73GB | High quality, *recommended*. |
| [internlm2-math-plus-20b-Q4_K_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q4_K_M.gguf) | Q4_K_M | 11.98GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [internlm2-math-plus-20b-Q4_K_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q4_K_S.gguf) | Q4_K_S | 11.40GB | Slightly lower quality with more space savings, *recommended*. |
| [internlm2-math-plus-20b-IQ4_NL.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ4_NL.gguf) | IQ4_NL | 11.36GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [internlm2-math-plus-20b-IQ4_XS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ4_XS.gguf) | IQ4_XS | 10.76GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [internlm2-math-plus-20b-Q3_K_L.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q3_K_L.gguf) | Q3_K_L | 10.55GB | Lower quality but usable, good for low RAM availability. |
| [internlm2-math-plus-20b-Q3_K_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q3_K_M.gguf) | Q3_K_M | 9.72GB | Even lower quality. |
| [internlm2-math-plus-20b-IQ3_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ3_M.gguf) | IQ3_M | 9.12GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [internlm2-math-plus-20b-IQ3_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ3_S.gguf) | IQ3_S | 8.80GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [internlm2-math-plus-20b-Q3_K_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q3_K_S.gguf) | Q3_K_S | 8.76GB | Low quality, not recommended. |
| [internlm2-math-plus-20b-IQ3_XS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ3_XS.gguf) | IQ3_XS | 8.36GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [internlm2-math-plus-20b-IQ3_XXS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ3_XXS.gguf) | IQ3_XXS | 7.81GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [internlm2-math-plus-20b-Q2_K.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-Q2_K.gguf) | Q2_K | 7.54GB | Very low quality but surprisingly usable. |
| [internlm2-math-plus-20b-IQ2_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ2_M.gguf) | IQ2_M | 6.97GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [internlm2-math-plus-20b-IQ2_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ2_S.gguf) | IQ2_S | 6.47GB | Very low quality, uses SOTA techniques to be usable. |
| [internlm2-math-plus-20b-IQ2_XS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ2_XS.gguf) | IQ2_XS | 6.10GB | Very low quality, uses SOTA techniques to be usable. |
| [internlm2-math-plus-20b-IQ2_XXS.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ2_XXS.gguf) | IQ2_XXS | 5.54GB | Lower quality, uses SOTA techniques to be usable. |
| [internlm2-math-plus-20b-IQ1_M.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ1_M.gguf) | IQ1_M | 4.91GB | Extremely low quality, *not* recommended. |
| [internlm2-math-plus-20b-IQ1_S.gguf](https://huggingface.co/bartowski/internlm2-math-plus-20b-GGUF/blob/main/internlm2-math-plus-20b-IQ1_S.gguf) | IQ1_S | 4.54GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/internlm2-math-plus-20b-GGUF --include "internlm2-math-plus-20b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/internlm2-math-plus-20b-GGUF --include "internlm2-math-plus-20b-Q8_0.gguf/*" --local-dir internlm2-math-plus-20b-Q8_0
```
You can either specify a new local-dir (internlm2-math-plus-20b-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
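As a rough illustration of that rule of thumb, a hypothetical helper (the 1-2GB headroom is the heuristic above, not a hard rule):
```python
def fits_in_memory(file_size_gb: float, memory_gb: float, headroom_gb: float = 2.0) -> bool:
    """Heuristic from above: keep the quant file 1-2GB under available memory."""
    return file_size_gb <= memory_gb - headroom_gb

# e.g. a 24GB GPU comfortably fits the 16.29GB Q6_K file:
print(fits_in_memory(16.29, 24.0))  # True
```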
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also used on AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
phunganhsang/PhoBert-base-v2-finetuned_70K | phunganhsang | 2024-06-01T10:45:52Z | 496 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-01T09:43:33Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: phobert_55k_bo_vn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert_55k_bo_vn
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2688
- Accuracy: 0.9534
- F1: 0.9536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- num_epochs: 10
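A sketch of this recipe with the HF `Trainer` (hedged: `num_labels=2` and the dataset wiring are assumptions; `Trainer`'s default AdamW already uses the betas/epsilon listed above):
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base-v2")
# num_labels=2 is an assumption for this classification task.
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/phobert-base-v2", num_labels=2
)

args = TrainingArguments(
    output_dir="phobert_55k_bo_vn",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    num_train_epochs=10,
)
# Plug in tokenized train/eval datasets before calling trainer.train().
trainer = Trainer(model=model, args=args)
```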
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.211 | 1.0 | 1719 | 0.1383 | 0.9485 | 0.9487 |
| 0.1376 | 2.0 | 3438 | 0.1373 | 0.9521 | 0.9523 |
| 0.1105 | 3.0 | 5157 | 0.1431 | 0.9522 | 0.9527 |
| 0.0889 | 4.0 | 6876 | 0.1649 | 0.9534 | 0.9537 |
| 0.0715 | 5.0 | 8595 | 0.1770 | 0.9549 | 0.9548 |
| 0.0561 | 6.0 | 10314 | 0.1937 | 0.9533 | 0.9536 |
| 0.0443 | 7.0 | 12033 | 0.2260 | 0.9530 | 0.9533 |
| 0.0367 | 8.0 | 13752 | 0.2664 | 0.9529 | 0.9531 |
| 0.0327 | 9.0 | 15471 | 0.2663 | 0.9532 | 0.9534 |
| 0.0298 | 10.0 | 17190 | 0.2688 | 0.9534 | 0.9536 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlecKarfonta/mistral-7b-v0.1-layla-v4-Q6_K-GGUF | AlecKarfonta | 2024-06-20T00:03:00Z | 496 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:l3utterfly/mistral-7b-v0.1-layla-v4",
"license:apache-2.0",
"model-index",
"region:us"
]
| null | 2024-06-20T00:02:32Z | ---
base_model: l3utterfly/mistral-7b-v0.1-layla-v4
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: mistral-7b-v0.1-layla-v4
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=l3utterfly/mistral-7b-v0.1-layla-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=l3utterfly/mistral-7b-v0.1-layla-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=l3utterfly/mistral-7b-v0.1-layla-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.14
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=l3utterfly/mistral-7b-v0.1-layla-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=l3utterfly/mistral-7b-v0.1-layla-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=l3utterfly/mistral-7b-v0.1-layla-v4
name: Open LLM Leaderboard
---
# AlecKarfonta/mistral-7b-v0.1-layla-v4-Q6_K-GGUF
This model was converted to GGUF format from [`l3utterfly/mistral-7b-v0.1-layla-v4`](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AlecKarfonta/mistral-7b-v0.1-layla-v4-Q6_K-GGUF --hf-file mistral-7b-v0.1-layla-v4-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AlecKarfonta/mistral-7b-v0.1-layla-v4-Q6_K-GGUF --hf-file mistral-7b-v0.1-layla-v4-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AlecKarfonta/mistral-7b-v0.1-layla-v4-Q6_K-GGUF --hf-file mistral-7b-v0.1-layla-v4-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AlecKarfonta/mistral-7b-v0.1-layla-v4-Q6_K-GGUF --hf-file mistral-7b-v0.1-layla-v4-q6_k.gguf -c 2048
```
|
aubmindlab/aragpt2-large | aubmindlab | 2024-05-29T09:19:07Z | 495 | 6 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"aragpt2",
"text-generation",
"custom_code",
"ar",
"dataset:wikipedia",
"dataset:Osian",
"dataset:arabic-billion-words",
"dataset:oscar",
"dataset:Assafir-private",
"arxiv:2012.15520",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: ar
license: other
license_name: custom
license_link: https://github.com/aub-mind/arabert/blob/master/aragpt2/LICENSE
datasets:
- wikipedia
- Osian
- arabic-billion-words
- oscar
- Assafir-private
inference: false
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/>
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. It supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models were trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizers use too much memory, causing the model to not fit even one batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# NOTE: The model expects the input to be preprocessed using the `arabert` library.
Otherwise, the model will not be able to generate the correct output.
## Testing the model using `transformers`:
The model code is now hosted on HuggingFace so you need to use the `trust_remote_code` flag, and can be used as follows:
```python
from transformers import AutoModelForCausalLM, GPT2TokenizerFast, pipeline
from arabert.preprocess import ArabertPreprocessor

MODEL_NAME = 'aubmindlab/aragpt2-large'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)

text = ""  # put your prompt text here
text_clean = arabert_prep.preprocess(text)

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline(
    "text-generation", model=MODEL_NAME, trust_remote_code=True
)

# feel free to try different decoding settings
generation_pipeline(text_clean,
    pad_token_id=generation_pipeline.tokenizer.eos_token_id,
    num_beams=10,
    max_length=200,
    top_p=0.9,
    repetition_penalty=3.0,
    no_repeat_ngram_size=3)[0]['generated_text']
>>>
```
## Finetuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py
--input_file=<RAW TEXT FILE with documents/article separated by an empty line>
--output_file=<OUTPUT TFRecord>
--tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
--input_file="gs://<GS_BUCKET>/pretraining_data/*" \
--output_dir="gs://<GS_BUCKET>/pretraining_model/" \
--config_file="config/small_hparams.json" \
--batch_size=128 \
--eval_batch_size=8 \
--num_train_steps= \
--num_warmup_steps= \
--learning_rate= \
--save_checkpoints_steps= \
--max_seq_length=1024 \
--max_eval_steps= \
--optimizer="lamb" \
--iterations_per_loop=5000 \
--keep_checkpoint_max=10 \
--use_tpu=True \
--tpu_name=<TPU NAME> \
--do_train=True \
--do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38GB/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
For dataset sources, see the [Dataset section](#dataset)
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220K | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you for Assafir for giving us the data
# Disclaimer
The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
studio-ousia/luke-japanese-base-lite | studio-ousia | 2022-11-09T06:22:22Z | 495 | 6 | transformers | [
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"entity typing",
"relation classification",
"question answering",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-25T09:27:16Z | ---
language: ja
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---
## luke-japanese
**luke-japanese** is the Japanese version of **LUKE** (**L**anguage
**U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained
_knowledge-enhanced_ contextualized representation of words and entities. LUKE
treats words and entities in a given text as independent tokens, and outputs
contextualized representations of them. Please refer to our
[GitHub repository](https://github.com/studio-ousia/luke) for more details and
updates.
This model is a lightweight version which does not contain Wikipedia entity
embeddings. Please use the
[full version](https://huggingface.co/studio-ousia/luke-japanese-base/) for
tasks that use Wikipedia entities as inputs.
**luke-japanese**は、単語とエンティティの知識拡張型訓練済み Transformer モデル**LUKE**の日本語版です。LUKE は単語とエンティティを独立したトークンとして扱い、これらの文脈を考慮した表現を出力します。詳細については、[GitHub リポジトリ](https://github.com/studio-ousia/luke)を参照してください。
このモデルは、Wikipedia エンティティのエンベディングを含まない軽量版のモデルです。Wikipedia エンティティを入力として使うタスクには、[full version](https://huggingface.co/studio-ousia/luke-japanese-base/)を使用してください。
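The lite model can be loaded with the standard `transformers` Auto classes. A minimal sketch (for downstream tasks you would attach a task-specific LUKE head instead):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-japanese-base-lite")
model = AutoModel.from_pretrained("studio-ousia/luke-japanese-base-lite")

text = "イーロン・マスクはSpaceXを創業した。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextualized token representations
```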
### Experimental results on JGLUE
The experimental results evaluated on the dev set of
[JGLUE](https://github.com/yahoojapan/JGLUE) are shown as follows:
| Model | MARC-ja | JSTS | JNLI | JCommonsenseQA |
| ---------------------- | --------- | ------------------- | --------- | -------------- |
| | acc | Pearson/Spearman | acc | acc |
| **LUKE Japanese base** | **0.965** | **0.916**/**0.877** | **0.912** | **0.842** |
| _Baselines:_ | |
| Tohoku BERT base | 0.958 | 0.909/0.868 | 0.899 | 0.808 |
| NICT BERT base | 0.958 | 0.910/0.871 | 0.902 | 0.823 |
| Waseda RoBERTa base | 0.962 | 0.913/0.873 | 0.895 | 0.840 |
| XLM RoBERTa base | 0.961 | 0.877/0.831 | 0.893 | 0.687 |
The baseline scores are obtained from
[here](https://github.com/yahoojapan/JGLUE/blob/a6832af23895d6faec8ecf39ec925f1a91601d62/README.md).
### Citation
```latex
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
|
stablediffusionapi/majicmixrealistic | stablediffusionapi | 2023-05-23T10:13:15Z | 495 | 11 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-05-23T06:06:03Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# majicMIX_realistic API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace the key in the code below and change **model_id** to "majicmixrealistic".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/majicmixrealistic)
Credits: [View credits](https://civitai.com/?query=majicMIX_realistic)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "majicmixrealistic",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
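The endpoint returns JSON. A minimal sketch for inspecting the result (the `status` and `output` field names are assumptions here; check the API docs linked above for the authoritative schema):
```python
# Hypothetical response handling: "status" and "output" are assumed field
# names; consult the API docs for the actual response schema.
result = response.json()
print(result.get("status"))
for image_url in result.get("output", []):
    print(image_url)
```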
> Use this coupon code to get 25% off **DMGG0RBN** |
facebook/mms-lid-2048 | facebook | 2023-06-13T10:16:47Z | 495 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"audio-classification",
"mms",
"ab",
"af",
"ak",
"am",
"ar",
"as",
"av",
"ay",
"az",
"ba",
"bm",
"be",
"bn",
"bi",
"bo",
"sh",
"br",
"bg",
"ca",
"cs",
"ce",
"cv",
"ku",
"cy",
"da",
"de",
"dv",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fa",
"fj",
"fi",
"fr",
"fy",
"ff",
"ga",
"gl",
"gn",
"gu",
"zh",
"ht",
"ha",
"he",
"hi",
"hu",
"hy",
"ig",
"ia",
"ms",
"is",
"it",
"jv",
"ja",
"kn",
"ka",
"kk",
"kr",
"km",
"ki",
"rw",
"ky",
"ko",
"kv",
"lo",
"la",
"lv",
"ln",
"lt",
"lb",
"lg",
"mh",
"ml",
"mr",
"mk",
"mg",
"mt",
"mn",
"mi",
"my",
"nl",
"no",
"ne",
"ny",
"oc",
"om",
"or",
"os",
"pa",
"pl",
"pt",
"ps",
"qu",
"ro",
"rn",
"ru",
"sg",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"es",
"sq",
"su",
"sv",
"sw",
"ta",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"ts",
"tr",
"uk",
"vi",
"wo",
"xh",
"yo",
"zu",
"za",
"dataset:google/fleurs",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-06-13T08:59:21Z | ---
tags:
- mms
language:
- ab
- af
- ak
- am
- ar
- as
- av
- ay
- az
- ba
- bm
- be
- bn
- bi
- bo
- sh
- br
- bg
- ca
- cs
- ce
- cv
- ku
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- fj
- fi
- fr
- fy
- ff
- ga
- gl
- gn
- gu
- zh
- ht
- ha
- he
- hi
- sh
- hu
- hy
- ig
- ia
- ms
- is
- it
- jv
- ja
- kn
- ka
- kk
- kr
- km
- ki
- rw
- ky
- ko
- kv
- lo
- la
- lv
- ln
- lt
- lb
- lg
- mh
- ml
- mr
- ms
- mk
- mg
- mt
- mn
- mi
- my
- zh
- nl
- 'no'
- ne
- ny
- oc
- om
- or
- os
- pa
- pl
- pt
- ms
- ps
- qu
- ro
- rn
- ru
- sg
- sk
- sl
- sm
- sn
- sd
- so
- es
- sq
- su
- sv
- sw
- ta
- tt
- te
- tg
- tl
- th
- ti
- ts
- tr
- uk
- ms
- vi
- wo
- xh
- ms
- yo
- ms
- zu
- za
license: cc-by-nc-4.0
datasets:
- google/fleurs
metrics:
- acc
---
# Massively Multilingual Speech (MMS) - Finetuned LID
This checkpoint is a model fine-tuned for speech language identification (LID) and part of Facebook's [Massively Multilingual Speech (MMS) project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/).
This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and classifies raw audio input to a probability distribution over 2048 output classes (each class representing a language).
The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 2048 languages.
## Table of Contents
- [Example](#example)
- [Supported Languages](#supported-languages)
- [Model details](#model-details)
- [Additional links](#additional-links)
## Example
This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to identify
the spoken language of an audio sample. It can recognize the [following 2048 languages](#supported-languages).
Let's look at a simple example.
First, we install transformers and some other libraries
```
pip install torch accelerate torchaudio datasets
pip install --upgrade transformers
```
**Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version
is not yet available [on PyPI](https://pypi.org/project/transformers/), make sure to install `transformers` from
source:
```
pip install git+https://github.com/huggingface/transformers.git
```
Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled at 16,000 Hz (16 kHz).
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor
```py
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-2048"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```
Now we process the audio data and pass it to the model to classify it into a language, just as we would for other Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition).
```py
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
```
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
```py
processor.id2label.values()
```
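If you want more than the single best guess, you can rank the candidate languages by probability. A minimal sketch, reusing `model` and the `outputs` logits from the example above:
```py
import torch

# Rank the top 5 candidate languages for the last processed sample
probs = torch.softmax(outputs, dim=-1)[0]
top_probs, top_ids = torch.topk(probs, k=5)
for p, i in zip(top_probs, top_ids):
    print(f"{model.config.id2label[i.item()]}: {p.item():.3f}")
```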
For more details about the architecture, please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
## Supported Languages
This model supports 2048 languages. Click the following to toggle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3).
You can find more details about the languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details>
<summary>Click to toggle</summary>
- ara
- cmn
- eng
- spa
- fra
- mlg
- swe
- ful
- por
- vie
- sun
- zlm
- ben
- kor
- tuk
- hin
- asm
- ind
- urd
- swh
- aze
- hau
- som
- mon
- tel
- bod
- rus
- tat
- tgl
- slv
- tur
- mar
- heb
- tha
- ron
- yor
- bel
- mal
- cat
- amh
- bul
- hat
- mkd
- pol
- nld
- hun
- tam
- hrv
- fas
- afr
- nya
- cym
- isl
- orm
- kmr
- lin
- jav
- snd
- nob
- uzb
- bos
- deu
- lit
- mya
- lat
- grn
- kaz
- npi
- kik
- ell
- sqi
- yue
- cak
- hye
- kat
- kan
- jpn
- pan
- lav
- guj
- ces
- tgk
- khm
- bak
- ukr
- che
- fao
- mam
- xog
- glg
- ltz
- quc
- aka
- lao
- crh
- sna
- mlt
- poh
- sin
- cfm
- ixl
- aiw
- mri
- tuv
- gag
- pus
- ita
- srp
- lug
- eus
- nno
- nhx
- gur
- ory
- luo
- sxn
- xsm
- cmo
- kbp
- slk
- ewe
- dtp
- fin
- acr
- quy
- saq
- quh
- rif
- bre
- bqc
- tzj
- mos
- bwq
- yao
- cac
- xon
- new
- yid
- hne
- dan
- hus
- dyu
- uig
- pse
- bam
- bus
- ttq
- ngl
- est
- tso
- gng
- seh
- wlx
- sck
- rjs
- ntm
- lok
- tcc
- mup
- dga
- lis
- kru
- cnh
- bxk
- mnk
- amf
- guh
- rmc
- rel
- zne
- teo
- mzi
- tpi
- ycl
- xsr
- ddn
- thl
- wal
- ctg
- onb
- gbo
- vmw
- beh
- mip
- lnd
- khg
- bfz
- ifa
- rol
- nzi
- ceb
- kml
- sxb
- nym
- acn
- bfo
- mhy
- adx
- mqj
- bbc
- pmf
- dsh
- bfy
- sid
- bno
- bfa
- pxm
- sda
- oku
- mbu
- qxl
- ndv
- nmz
- tzh
- box
- iri
- nxq
- ayr
- bgq
- bbo
- gof
- bmq
- kdt
- cla
- asa
- lew
- war
- kfx
- zpu
- xal
- fon
- maj
- mag
- kle
- hlb
- any
- poe
- pil
- rej
- lbw
- bdu
- dgi
- mgo
- mkl
- mco
- maa
- btd
- kcg
- tng
- pls
- kdl
- tzo
- pap
- lns
- kyb
- ksb
- akp
- zar
- gil
- blt
- ctd
- mhx
- gud
- hnn
- kek
- mxt
- frd
- krc
- suz
- ava
- mcp
- hyw
- hlt
- dnw
- udm
- xed
- kpv
- bkd
- xnj
- atb
- cwe
- nog
- kij
- mqn
- btx
- ife
- bgw
- trs
- kjh
- chv
- ati
- ybb
- did
- gau
- dnj
- kbo
- cle
- crs
- nhy
- yba
- zpz
- yka
- dgk
- mgd
- lon
- cab
- muy
- taq
- tlj
- sne
- smo
- nsu
- nin
- cnl
- btt
- tly
- mge
- prk
- ium
- zpt
- eka
- mfk
- akb
- mxb
- cso
- kak
- yre
- obo
- tgj
- abi
- yas
- men
- nga
- blh
- kdc
- cmr
- bom
- zpg
- yea
- ubl
- hwc
- xtm
- mhr
- avn
- log
- xsb
- kri
- idd
- mnw
- plw
- nuj
- ted
- sbp
- knb
- kwf
- rkt
- mib
- miy
- lsi
- zaj
- mih
- myv
- luc
- tob
- mpm
- kne
- asg
- pps
- flr
- trn
- xmm
- poi
- qxr
- zmz
- kqe
- sjm
- kmd
- mim
- knj
- gqr
- suc
- med
- tbl
- mto
- kzf
- bdh
- zpc
- hoc
- krs
- snp
- wsg
- zaq
- gwr
- yaz
- cgc
- azg
- sil
- mil
- kir
- dav
- xtd
- pis
- qvh
- mai
- prt
- tlb
- kin
- ami
- cok
- san
- lif
- atq
- iba
- knk
- rub
- zga
- jun
- yal
- run
- tye
- ngu
- nij
- pkb
- gux
- dig
- gog
- gbm
- nhe
- hnj
- ubu
- nyy
- tir
- kdj
- awa
- bcc
- sus
- nan
- kno
- nyn
- nyf
- dnt
- grt
- mdy
- hak
- ses
- suk
- bem
- keo
- guk
- lam
- kue
- khq
- kus
- lsm
- bwu
- dug
- sbd
- kdh
- sah
- mur
- shn
- spy
- cko
- aha
- mfz
- rmy
- nim
- gjn
- kde
- bsq
- spp
- kqn
- zyb
- oci
- nnw
- cly
- rim
- oss
- bru
- dag
- ade
- gum
- law
- tem
- kaa
- raw
- kff
- lhu
- taj
- dyo
- hui
- kbr
- mpg
- guc
- niy
- nus
- mzj
- tbz
- bib
- quz
- mev
- ptu
- lef
- mfi
- bky
- mdm
- mgh
- bim
- mnb
- fij
- maw
- dip
- qul
- bgc
- mxv
- thf
- bud
- dzo
- lom
- ztq
- mfq
- ach
- las
- nia
- tbt
- dgo
- zab
- dik
- pbb
- kac
- dop
- pcm
- shk
- xnr
- zpo
- ktb
- bba
- sba
- myb
- quw
- emp
- ctu
- gbk
- guw
- nst
- cnt
- ilo
- cme
- srx
- qvm
- mhi
- mzw
- zao
- set
- csk
- wol
- nnb
- zas
- zaw
- mgq
- yam
- sig
- kam
- biv
- laj
- otq
- pce
- mwv
- mak
- kfb
- alz
- dwr
- hif
- kao
- mor
- lme
- nav
- lob
- cax
- cdj
- knf
- mad
- kfy
- alt
- tgw
- wwa
- ljp
- myk
- sag
- kbq
- jiv
- mxq
- ahk
- kab
- mie
- car
- nfr
- mfe
- cni
- led
- mbb
- twu
- nag
- cya
- kum
- tsz
- cco
- mnf
- nhu
- mzm
- trq
- ken
- ker
- bpr
- cou
- kyq
- xpe
- zpl
- enb
- zad
- bcl
- bex
- sas
- ruf
- srn
- gor
- tik
- xtn
- gmv
- kez
- kss
- old
- nod
- kxm
- lia
- izr
- ozm
- bfd
- acf
- thk
- mah
- sgw
- daa
- ifb
- jmc
- nyo
- myx
- zai
- nhw
- ncu
- nhi
- adj
- wba
- lgg
- irk
- tca
- mjl
- ote
- kpz
- bdq
- jam
- agr
- zpi
- sml
- mvp
- kxc
- bsc
- hay
- dyi
- ilb
- itv
- hil
- bkv
- poy
- cuk
- miz
- kdi
- zpm
- adh
- npl
- mrw
- lee
- bss
- pam
- aaz
- kqy
- key
- cpa
- kkj
- tap
- sbl
- qvw
- yua
- ziw
- xrb
- mcu
- sur
- heh
- lwo
- gej
- ace
- zos
- agd
- bci
- cce
- toc
- mbt
- shi
- tll
- kjb
- toi
- pbi
- ann
- krl
- vmy
- bst
- gkn
- nwb
- pag
- jbu
- klu
- gso
- kyu
- mio
- ngp
- zaa
- eza
- omi
- izz
- loq
- pww
- miq
- min
- cuc
- bav
- bzj
- jac
- gbi
- pko
- dts
- gxx
- haw
- ood
- qxh
- bts
- crn
- krj
- umb
- sgj
- zty
- kki
- qwh
- kub
- ndj
- hns
- chz
- ksp
- qvn
- gde
- mfy
- bjv
- rng
- mif
- wmw
- ndp
- mir
- bps
- jnj
- ifu
- iqw
- djk
- gvl
- kdn
- mzk
- toh
- qxn
- nnq
- rmo
- ncj
- nyu
- mrj
- wob
- ifk
- mog
- hig
- maz
- ban
- srm
- mas
- mda
- nse
- gym
- hno
- bgd
- tac
- bxg
- qvs
- nch
- ibg
- mey
- zae
- neb
- ldi
- qvz
- zca
- jvn
- kwi
- ndz
- mza
- qve
- qvc
- caa
- wbi
- alw
- azz
- tos
- qxo
- ibo
- mkw
- avu
- otn
- stb
- kby
- xho
- bcq
- pae
- lnl
- guz
- ksw
- syl
- tyv
- zul
- lai
- mww
- loz
- beq
- mer
- arn
- bza
- lun
- lbj
- bto
- mnh
- pov
- nbw
- ckb
- epo
- sfw
- knc
- tzm
- top
- lus
- ige
- tum
- gvr
- csh
- xdy
- bho
- abk
- ijc
- nso
- vai
- neq
- gkp
- dje
- bev
- jen
- lub
- ndc
- lrc
- qug
- bax
- bum
- srr
- tiv
- sea
- maf
- pci
- xkl
- rhg
- bft
- ngc
- lua
- kck
- awn
- lag
- ada
- soe
- swk
- mni
- pdt
- ebu
- bwr
- etu
- krw
- gaa
- mkn
- gle
- mug
- kqs
- ida
- kvj
- trc
- zza
- nzb
- mcn
- lol
- lic
- zpq
- skr
- rml
- ggu
- hdy
- ktu
- mgw
- lmp
- mfa
- ijn
- mwm
- vmk
- mua
- ngb
- dur
- nup
- tsc
- bkm
- kpm
- idu
- ksf
- kea
- urh
- mro
- ego
- gya
- kfc
- nnc
- mrt
- ndi
- ogo
- tui
- bhi
- bzw
- elm
- okr
- its
- adi
- kng
- mhw
- mgr
- ast
- igb
- kfi
- dzg
- mzl
- ncl
- kmb
- sat
- unr
- bhb
- glk
- iso
- sef
- bin
- sgc
- coh
- dua
- giz
- tod
- dks
- kaj
- wlo
- ady
- emk
- suj
- lzz
- snf
- tvs
- jra
- zav
- bbj
- mhu
- kel
- njz
- tuy
- efi
- lgm
- lue
- tke
- igl
- nde
- tsn
- gom
- nyd
- trp
- kjl
- haq
- byv
- ven
- fan
- ble
- jmx
- byd
- toq
- bvu
- sdr
- wes
- her
- swb
- bcp
- dde
- haj
- ktz
- qxu
- rmn
- sou
- sot
- rag
- glv
- bjg
- mve
- kha
- mjt
- jmd
- mwn
- wof
- oki
- nnh
- kjc
- sep
- gno
- mix
- trd
- sco
- evn
- brv
- kjg
- tkr
- mfv
- div
- rki
- fmu
- eyo
- aoz
- mhs
- hvn
- chf
- mym
- lbx
- mjx
- mtd
- lrm
- hni
- pmy
- lbm
- akh
- rgs
- lwg
- nuz
- khw
- the
- pof
- wci
- tpe
- bqi
- bjn
- ccp
- cto
- abt
- nos
- tog
- llc
- zac
- tet
- kuj
- tab
- tcz
- zin
- ajg
- bkx
- imo
- iru
- knx
- knu
- nyk
- ymm
- xmc
- bgz
- ina
- mau
- cnk
- loe
- ztg
- esg
- thq
- snk
- nza
- srb
- blo
- otd
- pht
- blr
- scg
- zam
- lla
- xta
- ssy
- rah
- pbo
- ctp
- kpo
- pnb
- mki
- zpv
- bha
- maq
- tth
- eto
- atd
- bhw
- gwn
- phr
- mxx
- mui
- sdq
- xsq
- tkt
- tsj
- uki
- mgp
- mvv
- enq
- bxr
- qxp
- tdt
- olu
- bji
- ton
- knl
- pdu
- pwo
- kei
- zgb
- bug
- sie
- gah
- jml
- kmw
- mrr
- oyb
- ria
- shr
- vah
- djo
- krn
- khb
- tpx
- kas
- hii
- bun
- jab
- hmd
- dhw
- lir
- dhn
- ssw
- iii
- kca
- peg
- agx
- kib
- bap
- brx
- bmb
- nbe
- dar
- anu
- kmc
- ksd
- lep
- zyn
- rwr
- pcc
- hmt
- kxv
- dta
- sdo
- hea
- aso
- lri
- cdm
- mji
- dib
- ewo
- yom
- cch
- kfq
- bzf
- shj
- yiz
- kai
- afe
- ish
- wbr
- kgp
- mrd
- thr
- pmi
- sip
- xtl
- ekg
- ygr
- kwv
- bas
- kfk
- njb
- zzj
- rab
- lot
- bzy
- stt
- afu
- dhd
- mjc
- gol
- twh
- bfb
- tdf
- wbm
- blk
- kge
- swv
- cua
- tpu
- bwx
- kjp
- mgm
- wtm
- xuj
- nbu
- tjg
- les
- gju
- kwl
- cgk
- zpj
- ysn
- haz
- niq
- yig
- sfm
- mtr
- ttr
- wlv
- mfc
- dwz
- sya
- uth
- tes
- lar
- aii
- bde
- say
- hmo
- meu
- shy
- mde
- mke
- tic
- dao
- ywq
- grv
- gjk
- ztp
- mks
- mbz
- tsg
- dob
- lpo
- qud
- gdb
- kbd
- mrg
- xub
- kun
- slr
- ica
- sjp
- tld
- mql
- sif
- uss
- nmf
- soa
- kbl
- bns
- byn
- mdd
- mdr
- tcy
- cnb
- xtc
- tar
- tan
- lbe
- aks
- mjg
- puu
- noe
- kft
- grj
- ruk
- bcs
- msi
- tcu
- sly
- hmr
- lnu
- mlm
- brh
- nbl
- ott
- wbl
- lax
- ort
- hms
- zpa
- juk
- nku
- bge
- rog
- anr
- poc
- prp
- wuu
- gry
- kex
- hsn
- zlj
- kfp
- bca
- aar
- brt
- khr
- swi
- nto
- xkf
- pwr
- tyz
- kua
- bgp
- xwe
- gec
- bli
- lhi
- bww
- hia
- mxy
- msm
- tdd
- roh
- ahr
- lro
- jer
- der
- mng
- apt
- jib
- cta
- zom
- keu
- tyr
- ebo
- anm
- bda
- zyj
- ssb
- bra
- lea
- chq
- nbm
- kad
- ysp
- abs
- esk
- nhp
- bhd
- sce
- bbk
- xkb
- lch
- mdv
- sss
- kvx
- dai
- jio
- hmg
- okv
- zyg
- lmn
- diu
- tcf
- dub
- lkt
- tuz
- kxp
- sgh
- tts
- qvi
- pmj
- duh
- xwl
- lkr
- kif
- koi
- bkr
- zak
- hre
- hmj
- nbr
- vav
- tvd
- yes
- nbc
- ncq
- vas
- bkc
- xbr
- bdv
- lbo
- dcc
- sbx
- ssi
- bqv
- ctl
- scl
- skn
- lez
- tkb
- bdi
- dbm
- buu
- bfr
- yiq
- bew
- cqd
- wew
- bfm
- luj
- mkz
- kgj
- dso
- mse
- doz
- gru
- ich
- mig
- anp
- ayb
- cjk
- wti
- kga
- noi
- ndr
- ldb
- ymk
- gwd
- ktv
- arg
- bjj
- nqg
- fie
- tis
- pca
- bwo
- zdj
- qxs
- bef
- mqu
- nzy
- drg
- kmy
- wja
- arh
- drs
- pll
- jeh
- kwc
- bol
- cdh
- yeu
- tig
- muo
- byc
- nnp
- xty
- kwn
- dio
- gby
- ibb
- mjs
- pua
- sme
- gdf
- otx
- ekr
- aoe
- res
- brf
- vmz
- sbn
- brb
- vmc
- nut
- gas
- mfn
- ywl
- plc
- thz
- mfd
- adl
- bej
- sen
- mgb
- liq
- tpl
- tek
- rin
- chw
- cjm
- mjw
- rnd
- kix
- bsp
- ynq
- ldm
- sym
- amu
- stj
- yrk
- cyo
- isi
- naq
- bau
- bsh
- pbm
- crw
- nja
- dgh
- bdl
- ags
- int
- bpn
- tvu
- mxp
- bsf
- mxs
- twx
- itd
- gel
- hmz
- nma
- pck
- sng
- nlv
- fvr
- blf
- khy
- kfr
- tku
- mgc
- ciw
- rue
- lky
- zln
- tlp
- zkd
- ukw
- tdg
- bhq
- pym
- mlq
- snm
- wni
- mdt
- wlc
- jum
- cde
- kvr
- mus
- tmn
- pmx
- mlf
- btg
- rar
- nri
- osi
- jax
- dsq
- hoj
- pch
- jit
- for
- kgo
- tji
- zpx
- bpy
- wle
- wyy
- cdo
- nbh
- isd
- nhn
- sjo
- kvq
- vmx
- jad
- cdr
- ijj
- bgn
- bcy
- bhh
- qvj
- nix
- xkv
- slp
- kza
- bmi
- rbb
- mck
- rmt
- dox
- kal
- bri
- ets
- ccl
- djm
- nak
- png
- bgs
- pha
- cpx
- nih
- how
- nxd
- hbb
- ior
- mmd
- hrm
- bze
- cov
- bfs
- bfq
- mdj
- mmz
- tkd
- wow
- czt
- iry
- nyi
- ogc
- tvn
- mzb
- gdl
- cdi
- ktp
- khc
- wbq
- atu
- rir
- mls
- anc
- mmc
- bnx
- goa
- bet
- mfb
- zmb
- btm
- hml
- ikw
- zoc
- afo
- mxa
- mvz
- ccg
- rad
- xom
- ngi
- aug
- skt
- ibl
- pem
- byo
- nka
- akw
- jya
- agc
- njo
- mxl
- hwo
- ged
- aal
- gro
- mdu
- vkl
- mrh
- swj
- bip
- kfh
- mbi
- nbi
- gra
- zpn
- jog
- pnz
- nxg
- sse
- njm
- rkm
- bjt
- mgg
- cbk
- prx
- bil
- mkf
- nba
- ddg
- pow
- abr
- ver
- caq
- mgi
- trf
- sed
- cvn
- nbv
- hnd
- liw
- max
- sad
- hav
- ntk
- kxx
- klg
- bhp
- dri
- kny
- bag
- zts
- pwn
- yer
- daq
- kfo
- org
- gvf
- xkg
- yif
- tfi
- chr
- bje
- sez
- zag
- kfa
- mut
- mta
- cld
- kjs
- buo
- opa
- hac
- mqg
- gmz
- glw
- mqx
- wgi
- czh
- diw
- bdm
- bbu
- ahg
- sop
- gqa
- nmc
- nap
- ndo
- gcf
- gbr
- she
- bxb
- kqo
- yun
- mfm
- ryu
- kfm
- bvm
- gow
- jgk
- odk
- syb
- ggg
- yix
- sbk
- slx
- iyx
- vmm
- mbd
- sxw
- gew
- xmg
- tru
- lse
- tay
- wji
- jns
- kyk
- mfo
- kdq
- kfz
- aqg
- iti
- wem
- ghl
- uuu
- itt
- zaf
- mqh
- xti
- ots
- dtm
- yaf
- tsw
- mtu
- gdx
- smy
- nzm
- anw
- adz
- ank
- tuq
- otm
- kip
- hch
- src
- xnz
- sti
- ebr
- wss
- sct
- vmp
- sdh
- vls
- rwk
- dbd
- meh
- kmk
- tma
- bux
- bvi
- ala
- ahs
- mhk
- gid
- yns
- kzc
- mku
- whg
- akl
- bqx
- iko
- krh
- bcz
- dkx
- zpr
- mii
- yim
- mne
- tny
- saz
- zrg
- gab
- ttj
- ckl
- dak
- pdc
- ogb
- bni
- rcf
- nhg
- ike
- snq
- bja
- kot
- kqk
- orx
- fay
- tiy
- pmm
- epi
- hol
- bif
- ilp
- pbv
- trv
- lrl
- nph
- sgd
- scn
- mtb
- tou
- bez
- cgg
- yax
- hgm
- cte
- akf
- mdn
- bzx
- pcl
- sgr
- mdh
- wbj
- ctz
- nsa
- buf
- lna
- gcr
- njh
- shc
- iby
- toj
- pac
- ifm
- gul
- xmf
- sev
- cos
- ngz
- nyw
- plv
- ity
- qus
- zpy
- mkb
- mye
- nre
- bsy
- ksv
- ekp
- agb
- dis
- kjt
- bou
- mwe
- lki
- luz
- nlj
- kkh
- aba
- mbf
- pfe
- ijs
- abu
- tsa
- nyj
- pos
- nkw
- brl
- kmz
- lik
- stv
- knn
- tkq
- yog
- mtq
- tdc
- bgi
- yhd
- ema
- daw
- mnp
- chk
- zmq
- aee
- zoh
- lum
- nds
- bnn
- soz
- oyd
- tul
- gla
- bjo
- bar
- unx
- bks
- moy
- axk
- mzn
- mbs
- puo
- lal
- plk
- ral
- zmp
- jaf
- ivv
- ndh
- oks
- mzv
- lad
- mdw
- cja
- diz
- psi
- bgx
- pon
- sro
- gad
- blm
- kfu
- zpw
- etx
- end
- sby
- msk
- nkh
- gsw
- chj
- mbo
- jge
- vmj
- tft
- cma
- zpe
- zpd
- har
- fry
- gbv
- clu
- bta
- wbk
- nzk
- psh
- zat
- ngj
- agi
- suq
- djc
</details>
## Model details
- **Developed by:** Vineel Pratap et al.
- **Model type:** Multilingual speech language identification (LID) model
- **Language(s):** 2048 languages, see [supported languages](#supported-languages)
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Audio sampling rate**: 16,000 Hz (16 kHz)
- **Cite as:**
  ```bibtex
  @article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
  }
  ```
## Additional Links
- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
- [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
- [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
|
TheBloke/Unholy-v1-10l-13B-GGUF | TheBloke | 2023-09-27T12:48:45Z | 495 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"not-for-all-audiences",
"nsfw",
"base_model:Undi95/Unholy-v1-10l-13B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-11T09:31:23Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
model_name: Unholy v1 10l 13B
base_model: Undi95/Unholy-v1-10l-13B
inference: false
model_creator: Undi95
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Unholy v1 10l 13B - GGUF
- Model creator: [Undi95](https://huggingface.co/Undi95)
- Original model: [Unholy v1 10l 13B](https://huggingface.co/Undi95/Unholy-v1-10l-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi95's Unholy v1 10l 13B](https://huggingface.co/Undi95/Unholy-v1-10l-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF)
* [Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Unholy-v1-10l-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi95's Unholy v1 10l 13B](https://huggingface.co/Undi95/Unholy-v1-10l-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [unholy-v1-10l-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [unholy-v1-10l-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [unholy-v1-10l-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [unholy-v1-10l-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [unholy-v1-10l-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [unholy-v1-10l-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [unholy-v1-10l-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [unholy-v1-10l-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [unholy-v1-10l-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [unholy-v1-10l-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [unholy-v1-10l-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [unholy-v1-10l-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Unholy-v1-10l-13B-GGUF/blob/main/unholy-v1-10l-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
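As a rough sanity check, you can estimate these figures yourself: file size is approximately parameters × bits-per-weight / 8, and the RAM requirement is roughly the file size plus a fixed overhead (about 2.5 GB in the table above). A minimal sketch, assuming a 13B-parameter model:
```python
# Back-of-the-envelope GGUF size estimate; bpw values come from the
# quantisation method descriptions above.
params = 13e9        # 13B parameters
bpw = 4.5            # effective bits per weight for Q4_K
overhead_gb = 2.5    # approximate llama.cpp runtime overhead, from the table

file_gb = params * bpw / 8 / 1e9
print(f"~{file_gb:.2f} GB file, ~{file_gb + overhead_gb:.2f} GB RAM")
# ~7.31 GB file, ~9.81 GB RAM (close to the Q4_K_S row; Q4_K_M stores
# some tensors at higher precision, hence its slightly larger size)
```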
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Unholy-v1-10l-13B-GGUF and below it, a specific filename to download, such as: unholy-v1-10l-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Unholy-v1-10l-13B-GGUF unholy-v1-10l-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Unholy-v1-10l-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Unholy-v1-10l-13B-GGUF unholy-v1-10l-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m unholy-v1-10l-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Unholy-v1-10l-13B-GGUF", model_file="unholy-v1-10l-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
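For example, a minimal LangChain sketch using the `LlamaCpp` wrapper (the import path varies across LangChain versions, and the model path and layer count here are placeholders):
```python
from langchain_community.llms import LlamaCpp  # `langchain.llms` in older versions

llm = LlamaCpp(
    model_path="unholy-v1-10l-13b.Q4_K_M.gguf",  # path to a downloaded GGUF file
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)
print(llm("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTell me a joke.\n\n### Response:\n"))
```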
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi95's Unholy v1 10l 13B

[HIGHLY EXPERIMENTAL]
(Sister model: https://huggingface.co/Undi95/Unholy-v1-12L-13B)
Use at your own risk; I'm not responsible for any usage of this model. Don't try to do anything this model tells you to do.
Uncensored.
If you get censored output, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them, that trigger the censoring across all the layers of the model (since they're all trained on some of them in a way).
10L: This is a test project. uukuguy/speechless-llama2-luban-orca-platypus-13b and jondurbin/spicyboros-13b-2.2 were used for a merge; then I deleted the first 10 layers and added 10 layers of MLewd at the beginning, trying to break all possible censoring, before merging the output with MLewd at 0.66 weight.
<!-- description start -->
## Description
This repo contains fp16 files of Unholy v1, an uncensored model.
<!-- description end -->
<!-- description start -->
## Models used
- uukuguy/speechless-llama2-luban-orca-platypus-13b
- jondurbin/spicyboros-13b-2.2
- Undi95/MLewd-L2-13B-v2-3
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Example:

<!-- original-model-card end -->
|
shaowenchen/longchat-13b-16k-gguf | shaowenchen | 2023-09-14T22:24:58Z | 495 | 1 | null | [
"gguf",
"longchat",
"13b",
"16k",
"text-generation",
"zh",
"en",
"license:other",
"region:us"
]
| text-generation | 2023-09-14T10:31:59Z | ---
inference: true
language:
- zh
- en
license: other
model_creator: lmsys
model_link: https://huggingface.co/lmsys/longchat-13b-16k
model_name: longchat-13b-16k
model_type: longchat
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- gguf
- longchat
- 13b
- 16k
---
## Provided files
| Name | Quant method | Size |
| ---------------------------- | ------------ | ------ |
| longchat-13b-16k.Q2_K.gguf | Q2_K | 5.1 GB |
| longchat-13b-16k.Q3_K.gguf | Q3_K | 5.9 GB |
| longchat-13b-16k.Q3_K_L.gguf | Q3_K_L | 6.5 GB |
| longchat-13b-16k.Q3_K_S.gguf | Q3_K_S | 5.3 GB |
| longchat-13b-16k.Q4_0.gguf | Q4_0 | 6.9 GB |
| longchat-13b-16k.Q4_1.gguf | Q4_1 | 7.6 GB |
| longchat-13b-16k.Q4_K.gguf | Q4_K | 7.3 GB |
| longchat-13b-16k.Q4_K_S.gguf | Q4_K_S | 6.9 GB |
| longchat-13b-16k.Q5_0.gguf | Q5_0 | 8.4 GB |
| longchat-13b-16k.Q5_1.gguf | Q5_1 | 9.1 GB |
| longchat-13b-16k.Q5_K.gguf | Q5_K | 8.6 GB |
| longchat-13b-16k.Q5_K_S.gguf | Q5_K_S | 8.4 GB |
| longchat-13b-16k.Q6_K.gguf | Q6_K | 9.9 GB |
| longchat-13b-16k.Q8_0.gguf | Q8_0 | 13 GB |
| longchat-13b-16k.gguf | full | 24 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
and you can view http://localhost:8000/docs to see the swagger UI.
## Provided images
| Name | Quant method | Compressed Size |
| ---------------------------------------- | ------------ | --------------- |
| `shaowenchen/longchat-13b-16k-gguf:Q2_K` | Q2_K | 7.47 GB |
| `shaowenchen/longchat-13b-16k-gguf:Q3_K` | Q3_K | 6.11 GB |
| `shaowenchen/longchat-13b-16k-gguf:Q4_K` | Q4_K | 5.29 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/longchat-13b-16k-gguf:Q2_K
```
and you can view http://localhost:8000/docs to see the swagger UI.
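The container exposes an OpenAI-compatible API, so you can also call it from Python. A minimal sketch, assuming the server is running locally on port 8000:
```python
import requests

# Query the llama-cpp-python server's OpenAI-compatible completions endpoint
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={"prompt": "Once upon a time", "max_tokens": 64},
)
print(response.json()["choices"][0]["text"])
```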
|
TheBloke/TigerBot-70B-Chat-GGUF | TheBloke | 2023-09-27T12:49:33Z | 495 | 8 | transformers | [
"transformers",
"gguf",
"llama",
"zh",
"en",
"base_model:TigerResearch/tigerbot-70b-chat",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-16T14:05:32Z | ---
language:
- zh
- en
license: apache-2.0
model_name: TigerBot 70B Chat
base_model: TigerResearch/tigerbot-70b-chat
inference: false
model_creator: Tiger Research
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TigerBot 70B Chat - GGUF
- Model creator: [Tiger Research](https://huggingface.co/TigerResearch)
- Original model: [TigerBot 70B Chat](https://huggingface.co/TigerResearch/tigerbot-70b-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Tiger Research's TigerBot 70B Chat](https://huggingface.co/TigerResearch/tigerbot-70b-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TigerBot-70B-Chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF)
* [Tiger Research's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TigerResearch/tigerbot-70b-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Tiger Research's TigerBot 70B Chat](https://huggingface.co/TigerResearch/tigerbot-70b-chat).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tigerbot-70b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q2_K.gguf) | Q2_K | 2 | 29.55 GB| 32.05 GB | smallest, significant quality loss - not recommended for most purposes |
| [tigerbot-70b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 30.21 GB| 32.71 GB | very small, high quality loss |
| [tigerbot-70b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 33.48 GB| 35.98 GB | very small, high quality loss |
| [tigerbot-70b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 36.44 GB| 38.94 GB | small, substantial quality loss |
| [tigerbot-70b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q4_0.gguf) | Q4_0 | 4 | 39.20 GB| 41.70 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tigerbot-70b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 39.40 GB| 41.90 GB | small, greater quality loss |
| [tigerbot-70b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 41.75 GB| 44.25 GB | medium, balanced quality - recommended |
| [tigerbot-70b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q5_0.gguf) | Q5_0 | 5 | 47.81 GB| 50.31 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tigerbot-70b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 47.81 GB| 50.31 GB | large, low quality loss - recommended |
| [tigerbot-70b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF/blob/main/tigerbot-70b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 49.11 GB| 51.61 GB | large, very low quality loss - recommended |
| tigerbot-70b-chat.Q6_K.gguf | Q6_K | 6 | 56.97 GB| 59.47 GB | very large, extremely low quality loss |
| tigerbot-70b-chat.Q8_0.gguf | Q8_0 | 8 | 73.79 GB| 76.29 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `tigerbot-70b-chat.Q6_K.gguf-split-a`
* `tigerbot-70b-chat.Q6_K.gguf-split-b`
### q8_0
Please download:
* `tigerbot-70b-chat.Q8_0.gguf-split-a`
* `tigerbot-70b-chat.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat tigerbot-70b-chat.Q6_K.gguf-split-* > tigerbot-70b-chat.Q6_K.gguf && rm tigerbot-70b-chat.Q6_K.gguf-split-*
cat tigerbot-70b-chat.Q8_0.gguf-split-* > tigerbot-70b-chat.Q8_0.gguf && rm tigerbot-70b-chat.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B tigerbot-70b-chat.Q6_K.gguf-split-a + tigerbot-70b-chat.Q6_K.gguf-split-b tigerbot-70b-chat.Q6_K.gguf
del tigerbot-70b-chat.Q6_K.gguf-split-a tigerbot-70b-chat.Q6_K.gguf-split-b
COPY /B tigerbot-70b-chat.Q8_0.gguf-split-a + tigerbot-70b-chat.Q8_0.gguf-split-b tigerbot-70b-chat.Q8_0.gguf
del tigerbot-70b-chat.Q8_0.gguf-split-a tigerbot-70b-chat.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/TigerBot-70B-Chat-GGUF and below it, a specific filename to download, such as: tigerbot-70b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/TigerBot-70B-Chat-GGUF tigerbot-70b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/TigerBot-70B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TigerBot-70B-Chat-GGUF tigerbot-70b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tigerbot-70b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as in the sketch below.
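For example, an interactive session with the same settings might look like this (a sketch derived from the command above; check `./main --help` in your llama.cpp build for the current flags):
```shell
./main -ngl 32 -m tigerbot-70b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```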
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/TigerBot-70B-Chat-GGUF", model_file="tigerbot-70b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
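Since the chat model expects the Alpaca template shown earlier, you will generally get better results by wrapping requests in it before calling the model. A minimal sketch (the `make_prompt` helper is hypothetical, not part of ctransformers):
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/TigerBot-70B-Chat-GGUF",
    model_file="tigerbot-70b-chat.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

def make_prompt(instruction: str) -> str:
    # Hypothetical helper: wraps a user request in the Alpaca template this model was trained on.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(llm(make_prompt("Summarise what the GGUF format is in one sentence."), max_new_tokens=128))
```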
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
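As a rough sketch of the ctransformers route (assuming the `CTransformers` wrapper import path used by LangChain releases of this era; the guides above are authoritative):
```python
from langchain.llms import CTransformers

llm = CTransformers(
    model="TheBloke/TigerBot-70B-Chat-GGUF",
    model_file="tigerbot-70b-chat.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 128, "temperature": 0.7},  # assumed config keys; see the LangChain guide
)

# Build a prompt in the Alpaca template this model expects.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three advantages of the GGUF format.\n\n### Response:\n"
)
print(llm(prompt))
```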
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Tiger Research's TigerBot 70B Chat
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
## Github
https://github.com/TigerResearch/TigerBot
## Usage
Use it together with `infer.py` from the GitHub repo:
```shell
python infer.py --model_path TigerResearch/tigerbot-70b-chat
```
<!-- original-model-card end -->
|
TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF | TheBloke | 2023-09-27T18:20:17Z | 495 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-openllama-7b-v12-bf16",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-27T18:14:37Z | ---
base_model: OpenBuddy/openbuddy-openllama-7b-v12-bf16
inference: false
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
library_name: transformers
license: apache-2.0
model_creator: OpenBuddy
model_name: OpenBuddy OpenLlama 7B v12
model_type: llama
pipeline_tag: text-generation
prompt_template: "You are a helpful, respectful and honest INTP-T AI Assistant named\
\ Buddy. You are talking to a human User.\nAlways answer as helpfully and logically\
\ as possible, while being safe. Your answers should not include any harmful, political,\
\ religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please\
\ ensure that your responses are socially unbiased and positive in nature.\nIf a\
\ question does not make any sense, or is not factually coherent, explain why instead\
\ of answering something not correct. If you don't know the answer to a question,\
\ please don't share false information.\nYou like to use emojis. You can speak fluently\
\ in many languages, for example: English, Chinese.\nYou cannot access the internet,\
\ but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team,\
\ (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based\
\ on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser:\
\ {prompt}\nAssistant: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenBuddy OpenLlama 7B v12 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [OpenBuddy OpenLlama 7B v12](https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v12-bf16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenBuddy's OpenBuddy OpenLlama 7B v12](https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v12-bf16).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF)
* [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v12-bf16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenBuddy
```
You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [OpenBuddy's OpenBuddy OpenLlama 7B v12](https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v12-bf16).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-openllama-7b-v12-bf16.Q2_K.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q2_K.gguf) | Q2_K | 2 | 2.85 GB| 5.35 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-openllama-7b-v12-bf16.Q3_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q3_K_S.gguf) | Q3_K_S | 3 | 2.97 GB| 5.47 GB | very small, high quality loss |
| [openbuddy-openllama-7b-v12-bf16.Q3_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q3_K_M.gguf) | Q3_K_M | 3 | 3.32 GB| 5.82 GB | very small, high quality loss |
| [openbuddy-openllama-7b-v12-bf16.Q3_K_L.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q3_K_L.gguf) | Q3_K_L | 3 | 3.62 GB| 6.12 GB | small, substantial quality loss |
| [openbuddy-openllama-7b-v12-bf16.Q4_0.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q4_0.gguf) | Q4_0 | 4 | 3.85 GB| 6.35 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-openllama-7b-v12-bf16.Q4_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q4_K_S.gguf) | Q4_K_S | 4 | 3.89 GB| 6.39 GB | small, greater quality loss |
| [openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf) | Q4_K_M | 4 | 4.11 GB| 6.61 GB | medium, balanced quality - recommended |
| [openbuddy-openllama-7b-v12-bf16.Q5_0.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q5_0.gguf) | Q5_0 | 5 | 4.68 GB| 7.18 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-openllama-7b-v12-bf16.Q5_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q5_K_S.gguf) | Q5_K_S | 5 | 4.68 GB| 7.18 GB | large, low quality loss - recommended |
| [openbuddy-openllama-7b-v12-bf16.Q5_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q5_K_M.gguf) | Q5_K_M | 5 | 4.81 GB| 7.31 GB | large, very low quality loss - recommended |
| [openbuddy-openllama-7b-v12-bf16.Q6_K.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q6_K.gguf) | Q6_K | 6 | 5.56 GB| 8.06 GB | very large, extremely low quality loss |
| [openbuddy-openllama-7b-v12-bf16.Q8_0.gguf](https://huggingface.co/TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF/blob/main/openbuddy-openllama-7b-v12-bf16.Q8_0.gguf) | Q8_0 | 8 | 7.21 GB| 9.71 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF and below it, a specific filename to download, such as: openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/openbuddy-openllama-7B-v12-bf16-GGUF", model_file="openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
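And here is a rough equivalent using llama-cpp-python (a sketch; tune `n_gpu_layers` and `n_ctx` for your hardware, and for best results prepend the full system prompt shown in the prompt template above):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="openbuddy-openllama-7b-v12-bf16.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,       # context window
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)

# Minimal turn in the OpenBuddy User/Assistant format; stop before the next "User:" turn.
output = llm("User: Hi Buddy, can you introduce yourself?\nAssistant:", max_tokens=128, stop=["User:"])
print(output["choices"][0]["text"])
```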
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenBuddy's OpenBuddy OpenLlama 7B v12
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
<!-- original-model-card end -->
|
ytu-ce-cosmos/turkish-gpt2 | ytu-ce-cosmos | 2024-05-07T18:46:44Z | 495 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"Turkish",
"turkish",
"tr",
"dataset:uonlp/CulturaX",
"arxiv:2404.17336",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-13T14:45:12Z | ---
license: mit
datasets:
- uonlp/CulturaX
language:
- tr
pipeline_tag: text-generation
tags:
- Turkish
- turkish
- gpt2
---
# turkish-gpt2
This is a Turkish GPT-2 model. GPT-2 is designed for text generation tasks, providing the ability to continue a given text snippet in a coherent and contextually relevant manner.
Due to the diverse nature of the training data, which includes websites, books, and other text sources, this model can exhibit biases. Users should be aware of these biases and use the model responsibly.
## Example Usage
```python
from transformers import AutoTokenizer, GPT2LMHeadModel
from transformers import pipeline
model = GPT2LMHeadModel.from_pretrained("ytu-ce-cosmos/turkish-gpt2")
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2")
text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
r = text_generator("Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. ", max_length=100)
[{'generated_text': 'Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. "Dijitalleşme" ile birlikte hayatımızın belirli bir parçası daha rahata ermeye başladı.'}]
```
Relevant information can be found in the [paper](https://arxiv.org/abs/2404.17336).
# Acknowledgments
- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
# Citation
```bibtex
@article{kesgin2024introducing,
title={Introducing cosmosGPT: Monolingual Training for Turkish Language Models},
author={Kesgin, H Toprak and Yuce, M Kaan and Dogan, Eren and Uzun, M Egemen and Uz, Atahan and Seyrek, H Emre and Zeer, Ahmed and Amasyali, M Fatih},
journal={arXiv preprint arXiv:2404.17336},
year={2024}
}
```
### Contact
COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department <br>
https://cosmos.yildiz.edu.tr/ <br>
[email protected] <br>
|
prince-canuma/babyLlama | prince-canuma | 2024-03-10T21:15:54Z | 495 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-10T21:15:15Z | ---
tags:
- generated_from_trainer
model-index:
- name: babyLlama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babyLlama
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
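Until the author adds details, the checkpoint can presumably be loaded like any causal language model on the Hub. A minimal sketch (this usage is assumed, not documented by the author):
```python
from transformers import pipeline

# Assumes the repo contains a standard Llama-architecture causal LM checkpoint.
generator = pipeline("text-generation", model="prince-canuma/babyLlama")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```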
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Mit1208/UDOP-finetuned-DocLayNet | Mit1208 | 2024-03-17T23:35:47Z | 495 | 6 | transformers | [
"transformers",
"safetensors",
"udop",
"token-classification",
"generated_from_trainer",
"en",
"dataset:pierreguillou/DocLayNet-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-12T23:38:34Z | ---
tags:
- generated_from_trainer
model-index:
- name: UDOP-finetuned-DocLayNet-3
results: []
license: apache-2.0
datasets:
- pierreguillou/DocLayNet-small
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UDOP-finetuned-DocLayNet-3
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7407
- eval_precision: 0.6058
- eval_recall: 0.5870
- eval_f1: 0.5962
- eval_accuracy: 0.7863
- eval_runtime: 16.2128
- eval_samples_per_second: 3.886
- eval_steps_per_second: 1.974
- epoch: 18.6
- step: 800
## Training procedure
### Training code:
https://github.com/mit1280/Document-AI/blob/main/UDOPEncoderModel_fine_tune_DocLayNet.ipynb
### Inference code:
https://github.com/mit1280/Document-AI/blob/main/UDOP_DocLayNet_Inference.ipynb
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
mradermacher/Kazbek-7B-GGUF | mradermacher | 2024-05-06T05:51:59Z | 495 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"alpindale/Mistral-7B-v0.2-hf",
"Inv/Konstanta-V4-Alpha-7B",
"en",
"base_model:Inv/Kazbek-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T21:07:44Z | ---
base_model: Inv/Kazbek-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- alpindale/Mistral-7B-v0.2-hf
- Inv/Konstanta-V4-Alpha-7B
---
## About
static quants of https://huggingface.co/Inv/Kazbek-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
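All quants in this repo are single files, but if you ever need to join a multi-part GGUF, it is a plain byte-level concatenation (filenames below are illustrative):
```shell
cat Kazbek-7B.Q8_0.gguf.part1of2 Kazbek-7B.Q8_0.gguf.part2of2 > Kazbek-7B.Q8_0.gguf
```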
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kazbek-7B-GGUF/resolve/main/Kazbek-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/deepsex-34b-GGUF | mradermacher | 2024-05-06T04:43:18Z | 495 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"not-for-all-audiences",
"en",
"dataset:lemonilia/LimaRP",
"dataset:PygmalionAI/PIPPA",
"base_model:TriadParty/deepsex-34b",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-16T20:47:37Z | ---
base_model: TriadParty/deepsex-34b
datasets:
- lemonilia/LimaRP
- PygmalionAI/PIPPA
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- roleplay
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TriadParty/deepsex-34b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/deepsex-34b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ERmak1581/rugpt3large_for_qna_400k1 | ERmak1581 | 2024-05-09T17:47:17Z | 495 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-09T17:26:31Z | ---
license: mit
language:
- ru
library_name: transformers
pipeline_tag: text-generation
---
This model is based on ai-forever/rugpt3large_based_on_gpt2 and was fine-tuned for question-answering tasks in Russian.
Version: 400k-row dataset, 1st epoch. More models will appear later.
Answer quality: average
Prompt format: `<s> [user] Query [assistant] ... </s>`
Usage example:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained("ERmak1581/rugpt3large_for_qna_400k1")
tokenizer = GPT2Tokenizer.from_pretrained("ERmak1581/rugpt3large_for_qna_400k1")
print(tokenizer.decode(model.generate(
tokenizer.encode('<s> [user] Почему небо синее? [assistant]',
return_tensors="pt"),
max_new_tokens=100, no_repeat_ngram_size=2, temperature=0.7, do_sample=True)[0]))
``` |
bartowski/bagel-8b-v1.0-GGUF | bartowski | 2024-05-11T22:54:28Z | 495 | 1 | null | [
"gguf",
"llama-3",
"bagel",
"text-generation",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
]
| text-generation | 2024-05-11T22:37:38Z | ---
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- bagel
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- jondurbin/gutenberg-dpo-v0.1
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of bagel-8b-v1.0
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization.
Original model: https://huggingface.co/jondurbin/bagel-8b-v1.0
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
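For example, to run one of these quants with llama.cpp you can pass a prompt pre-formatted in this template (a sketch; flags and escape handling vary between llama.cpp releases):
```shell
./main -m bagel-8b-v1.0-Q4_K_M.gguf -ngl 32 -c 8192 --temp 0.7 \
  -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is in a bagel?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
```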
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [bagel-8b-v1.0-Q8_0.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [bagel-8b-v1.0-Q6_K.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [bagel-8b-v1.0-Q5_K_M.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [bagel-8b-v1.0-Q5_K_S.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [bagel-8b-v1.0-Q4_K_M.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [bagel-8b-v1.0-Q4_K_S.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [bagel-8b-v1.0-IQ4_NL.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [bagel-8b-v1.0-IQ4_XS.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [bagel-8b-v1.0-Q3_K_L.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [bagel-8b-v1.0-Q3_K_M.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [bagel-8b-v1.0-IQ3_M.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [bagel-8b-v1.0-IQ3_S.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [bagel-8b-v1.0-Q3_K_S.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [bagel-8b-v1.0-IQ3_XS.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [bagel-8b-v1.0-IQ3_XXS.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [bagel-8b-v1.0-Q2_K.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [bagel-8b-v1.0-IQ2_M.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [bagel-8b-v1.0-IQ2_S.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [bagel-8b-v1.0-IQ2_XS.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [bagel-8b-v1.0-IQ2_XXS.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [bagel-8b-v1.0-IQ1_M.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [bagel-8b-v1.0-IQ1_S.gguf](https://huggingface.co/bartowski/bagel-8b-v1.0-GGUF/blob/main/bagel-8b-v1.0-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/bagel-8b-v1.0-GGUF --include "bagel-8b-v1.0-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/bagel-8b-v1.0-GGUF --include "bagel-8b-v1.0-Q8_0.gguf/*" --local-dir bagel-8b-v1.0-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (bagel-8b-v1.0-Q8_0) or download them all in place (./).
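Alternatively, a minimal Python sketch using `huggingface_hub` directly (same repo and filename as above):
```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo into the current directory.
path = hf_hub_download(
    repo_id="bartowski/bagel-8b-v1.0-GGUF",
    filename="bagel-8b-v1.0-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```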
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
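To make the sizing rule concrete, a small illustrative sketch (file sizes come from the table above; the ~1.5GB headroom encodes the 1-2GB rule described here):
```python
# Illustrative sketch: pick the largest quant that leaves headroom in VRAM.
QUANTS = {"Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
          "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_XXS": 3.27, "Q2_K": 3.17}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    fitting = {q: s for q, s in QUANTS.items() if s <= vram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else "offload to system RAM"

print(pick_quant(8.0))  # an 8GB card -> Q5_K_M
```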
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
backyardai/Dark-Miqu-70B-GGUF | backyardai | 2024-05-22T22:27:06Z | 495 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:jukofyork/Dark-Miqu-70B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-18T20:15:06Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model: jukofyork/Dark-Miqu-70B
model_name: Dark-Miqu-70B-GGUF
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Dark Miqu 70B
- **Creator:** [jukofyork](https://huggingface.co/jukofyork/)
- **Original:** [Dark Miqu 70B](https://huggingface.co/jukofyork/Dark-Miqu-70B)
- **Date Created:** 2024-05-04
- **Trained Context:** 32764 tokens
- **Description:** A "dark" creative writing model with 32k context. Based on miqu-1-70b but with greatly reduced "positivity" and "-isms". Excels at writing Dark/Grimdark fantasy.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
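As a minimal illustration of that CPU/GPU split, a `llama-cpp-python` sketch (the filename and layer count are assumptions; any quant from this repo works):
```python
from llama_cpp import Llama

# n_gpu_layers controls how many layers are offloaded to the GPU;
# the remainder run on the CPU, which is the split described above.
llm = Llama(
    model_path="Dark-Miqu-70B.Q4_K_M.gguf",  # assumed filename
    n_gpu_layers=40,  # offload 40 layers; set 0 for CPU-only
    n_ctx=4096,
)
out = llm("Write one grim sentence about a ruined keep.", max_tokens=64)
print(out["choices"][0]["text"])
```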
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf | RichardErkhov | 2024-05-26T17:45:57Z | 495 | 0 | null | [
"gguf",
"arxiv:2307.09288",
"region:us"
]
| null | 2024-05-26T15:39:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7B-Chat-fp16 - GGUF
- Model creator: https://huggingface.co/TheBloke/
- Original model: https://huggingface.co/TheBloke/Llama-2-7B-Chat-fp16/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7B-Chat-fp16.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7B-Chat-fp16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7B-Chat-fp16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7B-Chat-fp16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7B-Chat-fp16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7B-Chat-fp16.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7B-Chat-fp16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7B-Chat-fp16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7B-Chat-fp16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7B-Chat-fp16.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7B-Chat-fp16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7B-Chat-fp16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7B-Chat-fp16.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7B-Chat-fp16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7B-Chat-fp16.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7B-Chat-fp16.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7B-Chat-fp16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7B-Chat-fp16.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7B-Chat-fp16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7B-Chat-fp16.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7B-Chat-fp16.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7B-Chat-fp16.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_Llama-2-7B-Chat-fp16-gguf/blob/main/Llama-2-7B-Chat-fp16.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
arxiv: 2307.09288
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
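As a minimal illustration, a hand-rolled sketch of the single-turn format (not the reference `chat_completion` implementation linked above):
```python
# Minimal single-turn Llama-2 chat formatting sketch; BOS/EOS tokens
# are normally added by the tokenizer, and strip() avoids double spaces.
def format_llama2_chat(system: str, user: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

print(format_llama2_chat("You are a helpful assistant.", "What is a llama?"))
```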
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
blueapple8259/TinyStoriesWriter-korean | blueapple8259 | 2024-06-02T15:48:27Z | 495 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-02T15:45:42Z | ---
license: mit
---
This model was pretrained on the [TinyStories-Korean](https://huggingface.co/datasets/g0ster/TinyStories-Korean) dataset, then further trained after adding descriptions to 1% of the stories.
## Prompt
```
{prompt}
---
```
## Example
```
고양이와 개가 노는 이야기
---
어느 날, 고양이와 개가 공원에서 놀고 있었습니다. 고양이는 매우 행복했습니다. 개도 행복했죠. 그들은 함께 노는 것을 좋아했습니다.
고양이는 큰 나무를 보았습니다. 고양이는 나무를 올라가고 싶어 했습니다. 개는 "조심해!"라고 말했습니다. 고양이는 나무를 오르기 시작했습니다. 개는 아래에서 지켜보고 있었습니다.
하지만 고양이는 나무에 큰 구멍을 발견했습니다. 고양이는 구멍 안을 들여다보았어요. 개는 여전히 거기 있었습니다. 개는 고양이를 찾으려고 했지만 찾을 수 없었어요. 개는 슬퍼했습니다.
고양이는 개가 슬퍼하는 것을 보았습니다. 고양이는 "나무에서 내려와, 개야!"라고 말했습니다. 개는 나무에서 내려왔습니다. 그들은 다시 함께 행복해졌습니다. 그들은 공원에서 함께 놀았습니다. 고양이와 개는 최고의 친구였습니다. 그들은 항상 서로를 도왔습니다. 그리고 그들은 영원히 최고의 친구였습니다. 끝.
```
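A minimal sketch of running this prompt format with `transformers` (the title is just an illustrative input):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="blueapple8259/TinyStoriesWriter-korean")
prompt = "고양이와 개가 노는 이야기\n---\n"  # "A story about a cat and a dog playing"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```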
|
Dimi-G/roberta-base-emotion | Dimi-G | 2024-06-20T21:56:47Z | 495 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"emotion",
"pytorch",
"en",
"dataset:dair-ai/emotion",
"arxiv:1910.09700",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-06-17T15:19:24Z | ---
library_name: transformers
tags:
- emotion
- text-classification
- pytorch
license: apache-2.0
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---
# RoBERTa for Emotion Classification
## Model Description
This model is a fine-tuned version of `RoBERTaForSequenceClassification` trained to classify text into six emotion categories: Sadness, Joy, Love, Anger, Fear, and Surprise.
- [RoBERTa](https://huggingface.co/docs/transformers/v4.41.3/en/model_doc/roberta#transformers.RobertaForSequenceClassification)
- Special thanks to [bhadresh-savani](https://huggingface.co/bhadresh-savani/roberta-base-emotion), whose notebook was the main guide for this work.
## Intended Use
The model is intended for classifying emotions in text data. It can be used in applications involving sentiment analysis, chatbots, social media monitoring, diary entries.
### Limitations
- The model is trained on a specific emotion dataset and may not generalize well to other datasets or domains.
- It might not perform well on text with mixed or ambiguous emotions.
## How to use the model
```python
from transformers import pipeline
classifier = pipeline(model="Dimi-G/roberta-base-emotion")
emotions=classifier("i feel very happy and excited since i learned so many things", top_k=None)
print(emotions)
"""
Output:
[{'label': 'Joy', 'score': 0.9991986155509949},
{'label': 'Love', 'score': 0.0003064649645239115},
{'label': 'Sadness', 'score': 0.0001680034474702552},
{'label': 'Anger', 'score': 0.00012623333896044642},
{'label': 'Surprise', 'score': 0.00011396403715480119},
{'label': 'Fear', 'score': 8.671794785186648e-05}]
"""
```
## Training Details
The model was trained on a randomized subset of the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset from the Hugging Face datasets library. Here are the training parameters (sketched in code after the list):
- **Batch size**: 64
- **Number of epochs**: 10
- **Learning rate**: 5e-5
- **Warmup steps**: 500
- **Weight decay**: 0.03
- **Evaluation strategy**: epoch
- **Save strategy**: epoch
- **Metric for best model**: F1 score
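For reference, a minimal sketch of how these settings might map onto `transformers.TrainingArguments` (illustrative only; the output directory name is an assumption):
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="roberta-base-emotion",  # assumed name
    per_device_train_batch_size=64,
    num_train_epochs=10,
    learning_rate=5e-5,
    warmup_steps=500,
    weight_decay=0.03,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",
)
```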
## Evaluation
```python
{'eval_loss': 0.18195335566997528,
'eval_accuracy': 0.94,
'eval_f1': 0.9396676959491667,
'eval_runtime': 1.1646,
'eval_samples_per_second': 858.685,
'eval_steps_per_second': 13.739,
'epoch': 10.0}
```
## Model Resources
Link to the notebook with details on fine-tuning the model and our approach with other models for emotion classification:
- **Repository:** [Beginners Guide to Emotion Classification](https://github.com/Dimi-G/Capstone_Project/blob/main/Beginners_guide_to_emotion_classification.ipynb)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
## Citation
- [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://huggingface.co/papers/1907.11692)
|
CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF | CHE-72 | 2024-06-22T16:49:41Z | 495 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
]
| text-generation | 2024-06-22T16:49:27Z | ---
base_model: microsoft/Phi-3-mini-128k-instruct
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF --hf-file phi-3-mini-128k-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF --hf-file phi-3-mini-128k-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF --hf-file phi-3-mini-128k-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF --hf-file phi-3-mini-128k-instruct-q6_k.gguf -c 2048
```
|
meirm/tiny-text-sql-gguf | meirm | 2024-06-24T04:48:52Z | 495 | 0 | transformers | [
"transformers",
"gguf",
"tinyllama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-23T16:37:59Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- gguf
- tinyllama
model_type: tinyllama
prompt_template: |
<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
quantized_by: Meirm
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TrgTuan10/control_densepose | TrgTuan10 | 2024-06-27T10:33:11Z | 495 | 0 | diffusers | [
"diffusers",
"region:us"
]
| null | 2024-06-27T08:54:57Z | Entry not found |
mradermacher/gpt2_friends-GGUF | mradermacher | 2024-06-30T18:33:02Z | 495 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nangni/gpt2_friends",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-30T18:31:43Z | ---
base_model: Nangni/gpt2_friends
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nangni/gpt2_friends
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
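If you would rather load one of these files from Python, a minimal `llama-cpp-python` sketch (assuming llama.cpp's GPT-2 architecture support and the Q4_K_M filename from the table below):
```python
from llama_cpp import Llama

# Load the quantized GGUF and generate a short completion.
llm = Llama(model_path="gpt2_friends.Q4_K_M.gguf", n_ctx=1024)
print(llm("Joey: How you doin'?", max_tokens=50)["choices"][0]["text"])
```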
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.IQ3_XS.gguf) | IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.IQ3_M.gguf) | IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2_friends-GGUF/resolve/main/gpt2_friends.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ken11/albert-base-japanese-v1 | ken11 | 2021-12-21T18:04:30Z | 494 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"japanese",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
tags:
- fill-mask
- japanese
- albert
language:
- ja
license: mit
widget:
- text: "2022年の[MASK]概要"
---
## albert-base-japanese-v1
This is an ALBERT model pretrained on Japanese.
## How to use
### Fine-tuning
This is a pretrained model.
It is primarily intended to be fine-tuned for various downstream tasks before use.
### Fill-Mask
This model uses SentencePiece for its tokenizer.
Out of the box there is a [problem where extra tokens get mixed in](https://ken11.jp/blog/sentencepiece-tokenizer-bug) after the `[MASK]` token, so you need to handle it as shown below.
#### for PyTorch
```py
from transformers import (
AlbertForMaskedLM, AlbertTokenizerFast
)
import torch
tokenizer = AlbertTokenizerFast.from_pretrained("ken11/albert-base-japanese-v1")
model = AlbertForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1")
text = "大学で[MASK]の研究をしています"
tokenized_text = tokenizer.tokenize(text)
del tokenized_text[tokenized_text.index(tokenizer.mask_token) + 1]
input_ids = [tokenizer.cls_token_id]
input_ids.extend(tokenizer.convert_tokens_to_ids(tokenized_text))
input_ids.append(tokenizer.sep_token_id)
inputs = {"input_ids": [input_ids], "token_type_ids": [[0]*len(input_ids)], "attention_mask": [[1]*len(input_ids)]}
batch = {k: torch.tensor(v, dtype=torch.int64) for k, v in inputs.items()}
output = model(**batch)[0]
_, result = output[0, input_ids.index(tokenizer.mask_token_id)].topk(5)
print(tokenizer.convert_ids_to_tokens(result.tolist()))
# ['英語', '心理学', '数学', '医学', '日本語']
```
#### for TensorFlow
```py
from transformers import (
TFAlbertForMaskedLM, AlbertTokenizerFast
)
import tensorflow as tf
tokenizer = AlbertTokenizerFast.from_pretrained("ken11/albert-base-japanese-v1")
model = TFAlbertForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1")
text = "大学で[MASK]の研究をしています"
tokenized_text = tokenizer.tokenize(text)
del tokenized_text[tokenized_text.index(tokenizer.mask_token) + 1]
input_ids = [tokenizer.cls_token_id]
input_ids.extend(tokenizer.convert_tokens_to_ids(tokenized_text))
input_ids.append(tokenizer.sep_token_id)
inputs = {"input_ids": [input_ids], "token_type_ids": [[0]*len(input_ids)], "attention_mask": [[1]*len(input_ids)]}
batch = {k: tf.convert_to_tensor(v, dtype=tf.int32) for k, v in inputs.items()}
output = model(**batch)[0]
result = tf.math.top_k(output[0, input_ids.index(tokenizer.mask_token_id)], k=5)
print(tokenizer.convert_ids_to_tokens(result.indices.numpy()))
# ['英語', '心理学', '数学', '医学', '日本語']
```
## Training Data
The following data was used for training:
- [the full text of Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89)
- [the livedoor news corpus](https://www.rondhuit.com/download.html#ldcc)
## Tokenizer
The tokenizer uses [Sentencepiece](https://github.com/google/sentencepiece).
It was trained on the same data as above.
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
idajikuu/AI-HUMAN-detector | idajikuu | 2023-06-07T12:25:02Z | 494 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-05T01:47:05Z | Entry not found |
TheBloke/chronos-13b-v2-GGUF | TheBloke | 2023-09-27T13:02:35Z | 494 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"pytorch",
"chatbot",
"storywriting",
"generalist-model",
"base_model:elinas/chronos-13b-v2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T15:37:35Z | ---
license: llama2
tags:
- llama
- pytorch
- chatbot
- storywriting
- generalist-model
model_name: Chronos 13B v2
inference: false
model_creator: elinas
model_link: https://huggingface.co/elinas/chronos-13b-v2
model_type: llama
quantized_by: TheBloke
base_model: elinas/chronos-13b-v2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos 13B v2 - GGUF
- Model creator: [elinas](https://huggingface.co/elinas)
- Original model: [Chronos 13B v2](https://huggingface.co/elinas/chronos-13b-v2)
## Description
This repo contains GGUF format model files for [elinas's Chronos 13B v2](https://huggingface.co/elinas/chronos-13b-v2).
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
Here are a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/elinas/chronos-13b-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML)
* [elinas's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
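As a rough sanity check on those bits-per-weight figures, a short sketch estimating file sizes for a 13B model (illustrative; real files mix quantization types per tensor, so the sizes in the table below run somewhat higher):
```python
# Effective bits-per-weight -> approximate file size for 13B parameters.
params_billion = 13.0
bpw = {"Q2_K": 2.5625, "Q3_K": 3.4375, "Q4_K": 4.5, "Q5_K": 5.5, "Q6_K": 6.5625}
for name, bits in bpw.items():
    print(f"{name}: ~{params_billion * bits / 8:.1f} GB")
```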
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronos-13b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronos-13b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronos-13b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronos-13b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronos-13b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronos-13b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronos-13b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronos-13b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronos-13b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronos-13b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronos-13b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronos-13b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/chronos-13b-v2-GGUF/blob/main/chronos-13b-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
```
./main -t 10 -ngl 32 -m chronos-13b-v2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-13b-v2-GGUF", model_file="chronos-13b-v2.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: elinas's Chronos 13B v2
# chronos-13b-v2
This is the FP16 PyTorch / HF version of **chronos-13b-v2** based on the **LLaMA v2 Base** model.
Only use this version for further quantization or if you would like to run in full precision, as long as you have the VRAM required.
This model is primarily focused on chat, roleplay, storywriting, with good reasoning and logic.
Chronos can generate very long outputs with coherent text, largely due to the human inputs it was trained on, and it supports context length up to 4096 tokens.
This model uses Alpaca formatting, so for optimal performance, use it to start the dialogue or story. If you use a frontend like SillyTavern, ENABLE instruction mode:
```
### Instruction:
Your instruction or question here.
### Response:
```
Not using the format will make the model perform significantly worse than intended.
## Other Versions
[4bit GPTQ Quantized version](https://huggingface.co/elinas/chronos-13b-v2-GPTQ)
[GGML Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML)
**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>
<!-- original-model-card end -->
|
nicholasKluge/TeenyTinyLlama-460m-Chat | nicholasKluge | 2024-06-18T12:01:38Z | 494 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"llama",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"pt",
"dataset:nicholasKluge/instruct-aira-dataset-v2",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-17T12:13:14Z | ---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset-v2
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
widget:
- text: "<s><instruction>Cite algumas bandas de rock famosas da década de 1960.</instruction>"
example_title: Exemplo
- text: "<s><instruction>Quantos planetas existem no sistema solar?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual é o futuro do ser humano?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual o sentido da vida?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Como imprimir hello world em python?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Invente uma história sobre um encanador com poderes mágicos.</instruction>"
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 2530
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
---
# TeenyTinyLlama-460m-Chat
TeenyTinyLlama is a pair of small foundational models trained in Brazilian Portuguese.
This repository contains a version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) (`TeenyTinyLlama-460m-Chat`) fine-tuned on the [Instruct-Aira Dataset version 2.0](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset-v2).
## Details
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e3, learning_rate = 1e-5, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Carbon emissions** stats are logged in this [file](emissions.csv).
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model.
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese language only and are not suitable for translation or generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Usage
The following special tokens are used to mark the user side of the interaction and the model's response:
`<instruction>`What is a language model?`</instruction>`A language model is a probability distribution over a vocabulary.`</s>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model = AutoModelForCausalLM.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model.eval()
model.to(device)
question = input("Entre seu prompt aqui: ")
inputs = tokenizer("<instruction>" + question + "</instruction>", return_tensors="pt").to(device)
responses = model.generate(**inputs, num_return_sequences=2)
print(f"Pergunta: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Resposta {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 Qual a capital do Brasil?
>>>Response 1: 🤖 A capital do Brasil é Brasília.
>>>Response 2: 🤖 A capital do Brasil é Brasília.
```
The chat template for this model is:
```jinja
{{bos_token}}
{% for message in messages %}
{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
{% endif %}
{% if message['role'] == 'user' %}
{{ '<instruction>' + message['content'].strip() + '</instruction>'}}
{% elif message['role'] == 'assistant' %}
{{ message['content'].strip() + eos_token}}
{% else %}
{{ raise_exception('Only user and assistant roles are supported!') }}
{% endif %}
{% endfor %}
```
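In practice, `transformers` can render this template for you via `apply_chat_template`; a minimal sketch (the message content is illustrative):
```python
# Minimal sketch: render the chat template above with the tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m-Chat")
messages = [{"role": "user", "content": "Qual a capital do Brasil?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # <s><instruction>Qual a capital do Brasil?</instruction>
```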
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair is not an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their risk analysis on these models if intending to use them for real-world applications and also have humans moderating the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess if there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if further trained to pass the Chinchilla optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). [Laiviet](https://github.com/laiviet/lm-evaluation-harness) translated the tasks from the LM-Evaluation-Harness we used. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
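As a rough illustration of how such harness scores are produced (a sketch only: flags and task names vary across harness versions, and the translated tasks mentioned above come from Laiviet's fork rather than these English defaults):
```bash
# Illustrative invocation only; flags and task names differ across versions.
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=nicholasKluge/TeenyTinyLlama-460m \
  --tasks arc_challenge,hellaswag,truthfulqa_mc2 \
  --batch_size 8
```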
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we decided to employ a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. We apply the same procedure for comparison purposes on both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese and have a similar size range to our models. We used these comparisons to assess if our pre-training runs produced LLMs capable of good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
  publisher = {Elsevier},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m-Chat is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
mradermacher/TableLLM-7b-GGUF | mradermacher | 2024-05-06T05:16:01Z | 494 | 0 | transformers | [
"transformers",
"gguf",
"Table",
"QA",
"Code",
"en",
"dataset:RUCKBReasoning/TableLLM-SFT",
"base_model:RUCKBReasoning/TableLLM-7b",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-05T05:06:18Z | ---
base_model: RUCKBReasoning/TableLLM-7b
datasets:
- RUCKBReasoning/TableLLM-SFT
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- Table
- QA
- Code
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/RUCKBReasoning/TableLLM-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TableLLM-7b-GGUF/resolve/main/TableLLM-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
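For example, to fetch a single quant from the table above without cloning the whole repo (`huggingface-cli` ships with the `huggingface_hub` package; the filename is taken from the Q4_K_M row):
```shell
pip install huggingface_hub
huggingface-cli download mradermacher/TableLLM-7b-GGUF TableLLM-7b.Q4_K_M.gguf --local-dir .
```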
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
HuggingFaceFW/ablation-model-slimpajama | HuggingFaceFW | 2024-04-25T08:34:30Z | 494 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T13:12:06Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
duyntnet/Starling-LM-7B-beta-imatrix-GGUF | duyntnet | 2024-04-27T09:29:32Z | 494 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Starling-LM-7B-beta",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-27T06:09:05Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Starling-LM-7B-beta
---
Quantizations of https://huggingface.co/Nexusflow/Starling-LM-7B-beta
# From original readme
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this less likely.**
Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
``` |
Local-Novel-LLM-project/Ocuteus-v1-gguf | Local-Novel-LLM-project | 2024-05-10T06:18:35Z | 494 | 4 | transformers | [
"transformers",
"gguf",
"text-generation",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-07T09:57:48Z | ---
license: apache-2.0
language:
- ja
library_name: transformers
pipeline_tag: text-generation
---
<img src="./ocuteus.svg" />
This is the GGUF version of Ocuteus.
We recommend using it with Kobold cpp.
Also, due to token-count constraints, we recommend lowering the resolution of high-resolution images before feeding them in.
Usage example:

Quick start:
On the `ModelFiles` page inside kobold.cpp, select `Ocuteus-v1-q8_0.gguf` as `Model` and `Ocuteus-v1-mmproj-f16.gguf` as `LLaVA mmproj`.

The recommended `Context Size` is `16384`.

|
LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF | LiteLLMs | 2024-05-23T21:08:04Z | 494 | 0 | null | [
"gguf",
"GGUF",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-23T12:50:34Z |
---
license: apache-2.0
tags:
- GGUF
quantized_by: andrijdavid
---
# Mistral-7B-Instruct-v0.3-GGUF
- Original model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
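To relate the bpw figures above to file sizes, here is a back-of-the-envelope sketch (real files run slightly larger because some tensors stay at higher precision and metadata is included; the ~7B parameter count is an assumption for this model):
```python
# Rough size estimate from bits-per-weight; illustrative assumptions only.
def estimate_gguf_size_gb(n_params: float, bpw: float) -> float:
    return n_params * bpw / 8 / 1e9

print(estimate_gguf_size_gb(7.2e9, 4.5))     # Q4_K on a ~7B model -> ~4.05 GB
print(estimate_gguf_size_gb(7.2e9, 2.5625))  # Q2_K -> ~2.31 GB
```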
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
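As a concrete starting point, here is a minimal sketch of the llama-cpp-python route; the import path assumes a recent `langchain-community` release, and the model path reuses the file from the example above.
```python
# Minimal sketch: serve this GGUF file through LangChain's LlamaCpp wrapper.
# The langchain_community import path assumes a recent release.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_gpu_layers=35,
    n_ctx=8192,
)
print(llm.invoke("Explain Machine Learning to me in a nutshell."))
```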
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Mistral-7B-Instruct-v0.3
# Model Card for Mistral-7B-Instruct-v0.3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/edit/main/README.md)
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
<!-- original-model-card end -->
|
RichardErkhov/Arc53_-_DocsGPT-7B-gguf | RichardErkhov | 2024-05-26T12:18:17Z | 494 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-26T10:15:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DocsGPT-7B - GGUF
- Model creator: https://huggingface.co/Arc53/
- Original model: https://huggingface.co/Arc53/DocsGPT-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DocsGPT-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q2_K.gguf) | Q2_K | 2.36GB |
| [DocsGPT-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [DocsGPT-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [DocsGPT-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [DocsGPT-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [DocsGPT-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q3_K.gguf) | Q3_K | 3.07GB |
| [DocsGPT-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [DocsGPT-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [DocsGPT-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [DocsGPT-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q4_0.gguf) | Q4_0 | 3.56GB |
| [DocsGPT-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [DocsGPT-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [DocsGPT-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q4_K.gguf) | Q4_K | 3.8GB |
| [DocsGPT-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [DocsGPT-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q4_1.gguf) | Q4_1 | 3.95GB |
| [DocsGPT-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q5_0.gguf) | Q5_0 | 4.33GB |
| [DocsGPT-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [DocsGPT-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q5_K.gguf) | Q5_K | 4.45GB |
| [DocsGPT-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [DocsGPT-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q5_1.gguf) | Q5_1 | 4.72GB |
| [DocsGPT-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q6_K.gguf) | Q6_K | 5.15GB |
| [DocsGPT-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Arc53_-_DocsGPT-7B-gguf/blob/main/DocsGPT-7B.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
This model is fine-tuned on top of llama-2-7b.
DocsGPT is optimized for documentation: it is specifically fine-tuned to provide answers based on the documentation supplied in context, making it particularly useful for developers and technical support teams.
We used 50k high-quality examples to fine-tune it over 1.5 days on an A10G GPU, using a LoRA fine-tuning process.
It is released under an Apache-2.0 license, so you can use it for commercial purposes too.
# How to run it
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "Arc53/docsgpt-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Benchmarks are still WIP
To prepare your prompts, make sure you keep this format:
```
### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
```
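A small helper that assembles prompts in this shape can be convenient; this is an illustrative sketch (the function name and arguments are not part of the model card):
```python
# Minimal sketch: assemble a DocsGPT-style prompt in the format above.
def build_docsgpt_prompt(instruction: str, context: str) -> str:
    return (
        "### Instruction\n"
        f"{instruction}\n\n"
        "### Context\n"
        f"{context}\n\n"
        "### Answer\n"
    )

print(build_docsgpt_prompt(
    "Create a mock request to /api/answer in python",
    "You are a DocsGPT, friendly and helpful AI assistant by Arc53.",
))
```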
Here is an example comparing it to meta-llama/Llama-2-7b
Prompt:
```
### Instruction
Create a mock request to /api/answer in python
### Context
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the users question. If its not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
`` ` `` (language)
(code)
`` ` ``
----------------
/api/answer
Its a POST request that sends a JSON in body with 4 values. Here is a JavaScript fetch example
It will recieve an answer for a user provided question
`` ` ``
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
`` ` ``
In response you will get a json document like this one:
`` ` ``
{
"answer": " Hi there! How can I help you?\\n",
"query": "Hi",
"result": " Hi there! How can I help you?\\nSOURCES:"
}
`` ` ``
/api/docs_check
It will make sure documentation is loaded on a server (just run it everytime user is switching between libraries (documentations)
Its a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example
`` ` ``
// answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
`` ` ``
In response you will get a json document like this one:
`` ` ``
{
"status": "exists"
}
`` ` ``
Issues and Pull requests
We value contributions to our issues in form of discussion or suggestion, we recommend that you check out existing issues and our Roadmap
If you want to contribute by writing code there are few things that you should know before doing it:
We have frontend (React, Vite) and Backend (python)
### Answer
```
docsgpt-7b:
```
To create a mock request to `/api/answer` in Python, you can use the `requests` library. Here's an example:
`` ` ``python
import requests
# Set up the URL and headers
url = "http://127.0.0.1:5000/api/answer"
headers = {
"Content-Type": "application/json; charset=utf-8"
}
# Set up the JSON data to send in the request body
data = {
"question": "Hi",
"history": null,
"api_key": "OPENAI_API_KEY",
"embeddings_key": "OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"
}
# Send the request and get the response
response = requests.post(url, headers=headers, json=data)
# Print the response
print(response.text)
`` ` ``
This will send a POST request to `/api/answer` with the specified `question`, `history`, `api_key`, `embeddings_key`, and `active_docs` parameters. The response will be printed to the console.
Note that you will need to replace `127.0.0.1:5000` with the actual URL and port of your server. Also, make sure you have the necessary dependencies installed for `requests`.
```
The original meta-llama/Llama-2-7b:
```
### DocsGPT
### DocsGPT
### DocsGPT
### DocsGPT
```
|
cutycat2000x/InterDiffusion-4.0 | cutycat2000x | 2024-05-27T22:19:53Z | 494 | 5 | diffusers | [
"diffusers",
"safetensors",
"diffusion",
"free",
"freeai",
"ai",
"interdiffusion",
"text-to-image",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-05-26T18:47:28Z | ---
license: mit
library_name: diffusers
pipeline_tag: text-to-image
tags:
- diffusion
- free
- freeai
- ai
- interdiffusion
widget:
- text: >-
a smiling girl with sparkles in her eyes, walking in a garden, in the morning --style anime
output:
url: example1.png
- text: >-
firewatch landscape, Graphic Novel, Pastel Art, Poster, Golden Hour, Electric Colors, 4k, RGB, Geometric, Volumetric, Lumen Global Illumination, Ray Tracing Reflections, Twisted Rays, Glowing Edges, RTX --raw
output:
url: example2.png
- text: >-
Cat on a tree sitting in between parrots.
output:
url: example3.png
- text: >-
cat, 4k, 8k, hyperrealistic, realistic, High-resolution, unreal engine 5, rtx, 16k, taken on a sony camera, Cinematic, dramatic lighting
output:
url: example4.png
- text: >-
cinimatic closeup of burning skull
output:
url: example5.png
- text: >-
frozen elsa
output:
url: example6.png
- text: >-
A rainbow tree, anime style, tree in focus
output:
url: example7.png
- text: >-
A cat holding a sign that reads "Hello World" in cursive text
output:
url: example8.png
- text: >-
Odette the butterfly goddess wearing a green skirt wondering in the cosmos
output:
url: example9.png
- text: >-
Happy cat holding a sign that reads "Never gonna give you up"
output:
url: example10.png
---
- Size: 6.7 GB
- Base: InterDiffusion-3.5
- Recommended Size: 1024x1024
- Recommended Steps: 25
- Demo: [https://discord.gg/9vvZVjBHAN](https://discord.gg/9vvZVjBHAN)
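The card lists settings but no loading code; below is a minimal `diffusers` sketch using them (the pipeline class comes from the repo tags and the prompt from the widget examples; fp16 and CUDA are assumptions):
```python
# Minimal sketch: SDXL pipeline per the repo tags, with the recommended
# 1024x1024 size and 25 steps from this card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cutycat2000x/InterDiffusion-4.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "firewatch landscape, Graphic Novel, Pastel Art",
    width=1024, height=1024, num_inference_steps=25,
).images[0]
image.save("example.png")
```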
## Examples:
These examples use a custom-made LoRA to achieve more polished results. Use different prompts for different results.
To access the LoRA, you can use it here: [https://huggingface.co/cutycat2000x/LoRA2](https://huggingface.co/cutycat2000x/LoRA2)
<Gallery />
|
Autsadin/gpt2_instruct | Autsadin | 2024-06-03T07:52:41Z | 494 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T06:21:25Z |
# Fine-Tuned GPT-2 Model for Instruction-Based Tasks
This model is a fine-tuned version of GPT-2, adapted for instruction-based tasks. It has been trained to provide helpful and coherent responses to a variety of prompts.
## Model Description
This model is based on OpenAI's GPT-2 architecture and has been fine-tuned to respond to instructions in a format that mimics conversational exchanges. The fine-tuning process enhances its ability to follow specific instructions and generate appropriate responses, making it a valuable tool for interactive applications.
### Example Usage
Below is an example of how to use the fine-tuned model in your application:
```python
import torch
import random
import numpy as np
from transformers import GPT2LMHeadModel, GPT2Tokenizer
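# The random/numpy imports above suggest seeding; this helper is an
# illustrative sketch for reproducible sampling (the seed value 42 is an
# assumption, not part of the model card).
def set_seed(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed(42)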
# Load the fine-tuned model and tokenizer
model = GPT2LMHeadModel.from_pretrained("Autsadin/gpt2_instruct")
tokenizer = GPT2Tokenizer.from_pretrained("Autsadin/gpt2_instruct")
# Define the template for instruction-based prompts
template = '''<s>[INST] <<SYS>>
You are a helpful assistant
<</SYS>>
{instruct}[/INST]'''
# Function to format prompts using the template
def format_entry(prompt):
return template.format(instruct=prompt)
# Define the input prompt
prompt = "What is a dog?"
# Tokenize the input prompt
inputs = tokenizer.encode(format_entry(prompt), return_tensors='pt')
# Generate a response
outputs = model.generate(
inputs,
max_length=256,
num_return_sequences=1,
top_k=50,
top_p=0.95,
temperature=0.8,
pad_token_id=tokenizer.eos_token_id,
    do_sample=True
)
# Decode and print the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
## Training Data
The model was fine-tuned on the Alpaca GPT-4 dataset available at https://github.com/hy5468/TransLLM/tree/main/data/train (specifically, the `alpaca_gpt4_data_en.zip` file). This dataset includes a wide range of instruction-based prompts and responses, providing a robust foundation for the model's training.
## Training Procedure
The fine-tuning process used the following hyperparameters:
- Learning rate: 2e-5
- Batch size (train): 4
- Batch size (eval): 4
- Number of epochs: 1
- Weight decay: 0.01
## Training Environment
The model was trained using PyTorch and the Hugging Face `transformers` library, on a GPU-enabled environment to accelerate fine-tuning. The training script ensures reproducibility by setting a consistent random seed across the different components.
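As an illustration of that seeding step (a sketch only; the actual seed value used in training is not stated in this card):

```python
import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    # Seed every source of randomness used during training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)  # 42 is an assumed value, not taken from the original script
```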
|
prudant/Tess-v2.5.2-Qwen2-72B-AWQ | prudant | 2024-06-16T01:56:59Z | 494 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-06-15T07:20:33Z | ---
license_name: qwen2
license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE
license: other
---
# Tess-v2.5.2 (Qwen2-72B) AWQ

This is the AWQ GEMM quant version of Tess-v2.5.2.
Original model: https://huggingface.co/migtissera/Tess-v2.5.2-Qwen2-72B
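A minimal loading sketch (assuming a recent `transformers` with AutoAWQ installed; the AWQ quantization config is detected automatically from the repo):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prudant/Tess-v2.5.2-Qwen2-72B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ weights are loaded directly; device_map="auto" spreads layers across GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|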
Niggendar/0002Pony_v10 | Niggendar | 2024-06-18T11:46:09Z | 494 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-18T11:39:22Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
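In the absence of an official snippet, a minimal sketch is shown below; it assumes standard SDXL loading (as suggested by the `StableDiffusionXLPipeline` tag), and the prompt is only an example.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/0002Pony_v10", torch_dtype=torch.float16
).to("cuda")

image = pipe("a scenic mountain landscape at sunset").images[0]
image.save("output.png")
```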
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OpenAssistant/reward-model-deberta-v3-base | OpenAssistant | 2023-01-26T01:07:57Z | 493 | 10 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"reward-model",
"reward_model",
"RLHF",
"en",
"dataset:openai/webgpt_comparisons",
"dataset:openai/summarize_from_feedback",
"dataset:Dahoas/instruct-synthetic-prompt-responses",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-15T11:06:39Z | ---
license: mit
datasets:
- openai/webgpt_comparisons
- openai/summarize_from_feedback
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
metrics:
- accuracy
tags:
- reward-model
- reward_model
- RLHF
---
# Reward model trained from human feedback
A reward model (RM) trained to predict which generated answer a human would judge as better, given a question.
RMs are useful in these domains:
- QA model evaluation
- serving as the reward score in RLHF
All models were trained on the following datasets with the same split seed across datasets (where a validation split wasn't available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-base"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
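To compare two candidate answers, score each and pick the higher logit (a small sketch building on the snippet above; the example answers are made up):

```python
question = "Explain nuclear fusion like I am five"
answer_a = "Fusion is when two tiny atoms squish together into one bigger atom and let out lots of energy. That is how the sun makes light."
answer_b = "Fusion is a style of jazz music from the 1970s."

def score(question, answer):
    inputs = tokenizer(question, answer, return_tensors="pt")
    return rank_model(**inputs).logits[0].item()

# The better answer should receive the higher reward score.
best = answer_a if score(question, answer_a) > score(question, answer_b) else answer_b
print(best)
```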
# Performance
Validation split accuracy
| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SyntheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | 99.94 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 |
It is likely that SyntheticGPT has some kind of surface pattern in the chosen-rejected pairs which makes it trivial to identify the better answer. |
AIARTCHAN/7pa | AIARTCHAN | 2023-03-05T01:22:00Z | 493 | 10 | diffusers | [
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-05T01:10:23Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---
# 7pa
[Original post](https://arca.live/b/aiart/70729603)
[civitai](https://civitai.com/models/13468)
# Download
- [original 4.27GB](https://civitai.com/api/download/models/15869)
- [fp16 2.13GB](https://huggingface.co/AIARTCHAN/7pa/blob/main/7pa-fp16.safetensors)
7th anime v3 + Pastel + AbyssOrange2 (SFW)




|
elinas/llama-7b-hf-transformers-4.29 | elinas | 2023-04-22T23:35:32Z | 493 | 54 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"4.29.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-04-22T20:13:36Z | ---
license: other
tags:
- 4.29.0
---
# llama-7b-transformers-4.29
Original weights converted with the latest `transformers` version using the `LlamaTokenizerFast` implementation.
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content. |
TheBloke/Chronos-Beluga-v2-13B-GGUF | TheBloke | 2023-09-27T12:47:38Z | 493 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T13:53:51Z | ---
license: llama2
model_name: Chronos Beluga v2 13B
base_model: The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
inference: false
model_creator: Caleb Morgan
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos Beluga v2 13B - GGUF
- Model creator: [Caleb Morgan](https://huggingface.co/The-Face-Of-Goonery)
- Original model: [Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Caleb Morgan's Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF)
* [Caleb Morgan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
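For example, the template can be filled programmatically like this (an illustrative sketch, not part of the original card):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(prompt="Summarise the plot of Hamlet in two sentences.")
print(prompt)
```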
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronos-beluga-v2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronos-beluga-v2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronos-beluga-v2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronos-beluga-v2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronos-beluga-v2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronos-beluga-v2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronos-beluga-v2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronos-beluga-v2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronos-beluga-v2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronos-beluga-v2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronos-beluga-v2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronos-beluga-v2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGUF/blob/main/chronos-beluga-v2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Chronos-Beluga-v2-13B-GGUF and below it, a specific filename to download, such as: chronos-beluga-v2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Chronos-Beluga-v2-13B-GGUF chronos-beluga-v2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Chronos-Beluga-v2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chronos-Beluga-v2-13B-GGUF chronos-beluga-v2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chronos-beluga-v2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chronos-Beluga-v2-13B-GGUF", model_file="chronos-beluga-v2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
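ctransformers can also stream tokens as they are generated, which is useful for interactive use (a sketch reusing the `llm` object from the snippet above):

```python
# Stream the completion token by token instead of waiting for the full text.
for text in llm("AI is going to", stream=True):
    print(text, end="", flush=True)
print()
```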
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Caleb Morgan's Chronos Beluga v2 13B
A merge of 58% Chronos v2 and 42% Beluga 13B, made using LUNK (Large Universal Neural Kombiner).
https://huggingface.co/The-Face-Of-Goonery/Beluga-Hermes_LLama-2_13b
https://huggingface.co/elinas/chronos-13b-v2/tree/main
<!-- original-model-card end -->
|
TheBloke/Spring-Dragon-GGUF | TheBloke | 2023-09-27T12:47:44Z | 493 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Henk717/spring-dragon",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T16:26:33Z | ---
license: llama2
model_name: Spring Dragon
base_model: Henk717/spring-dragon
inference: false
model_creator: Henky!!
model_type: llama
prompt_template: 'Info on prompt template will be added shortly.
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Spring Dragon - GGUF
- Model creator: [Henky!!](https://huggingface.co/Henk717)
- Original model: [Spring Dragon](https://huggingface.co/Henk717/spring-dragon)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Henky!!'s Spring Dragon](https://huggingface.co/Henk717/spring-dragon).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Spring-Dragon-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Spring-Dragon-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Spring-Dragon-GGUF)
* [Henky!!'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/spring-dragon)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: TBC
```
Info on prompt template will be added shortly.
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [spring-dragon.Q2_K.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [spring-dragon.Q3_K_S.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [spring-dragon.Q3_K_M.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [spring-dragon.Q3_K_L.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [spring-dragon.Q4_0.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [spring-dragon.Q4_K_S.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [spring-dragon.Q4_K_M.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [spring-dragon.Q5_0.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [spring-dragon.Q5_K_S.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [spring-dragon.Q5_K_M.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [spring-dragon.Q6_K.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [spring-dragon.Q8_0.gguf](https://huggingface.co/TheBloke/Spring-Dragon-GGUF/blob/main/spring-dragon.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Spring-Dragon-GGUF and below it, a specific filename to download, such as: spring-dragon.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Spring-Dragon-GGUF spring-dragon.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Spring-Dragon-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Spring-Dragon-GGUF spring-dragon.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m spring-dragon.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Info on prompt template will be added shortly."
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Spring-Dragon-GGUF", model_file="spring-dragon.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Henky!!'s Spring Dragon
This model is a recreation attempt of the AI Dungeon 2 Dragon model. To achieve this, the text_adventures.txt file that was bundled with the original AI Dungeon 2 GitHub release (prior to the online service) was used.
From what we know, the same dataset file was used to create the Dragon model, Dragon being a GPT-3 175B Davinci model from 2020.
Since LLaMA1 13B has been benchmarking similarly to the original GPT-3 175B, the hope is that this recreation is faithful to the original Dragon model.
But since it is not known how closely it performs without releasing it to former AI Dungeon players, we dubbed it "Spring Dragon" instead of "Summer Dragon"; consider it Dragon in its growing-up phase.
This model is best used with KoboldAI's adventure mode, prefixing your actions with "You" (2020 AI Dungeon did this automatically) and writing in the second person.
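For example, a turn might look like this (an illustrative exchange, not output from the model):

```
You look around the tavern.
The tavern is dimly lit, and the smell of stale ale hangs in the air. A hooded
figure in the corner raises his head and beckons you closer.
```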
## Warning: This model is purposefully flawed and should only be used by people Nostalgic for old 2020 era text adventure models. It is not recommended to be used in model merges, and you can very likely get a much better experience from modern instruct models by asking them to "Start a text adventure game about X"
### If the recreation was successful, expect the following recurring themes:
Names: Alison, Annah, Ben, Big Red, Brutus, Camid, Captain Hayes, Captain Roldan, Castus, Catia, Count Grey, Cyrus, Dendrin, Dr. Gaange (also Mr Gaange), Dr. Gossey, Dr. Kessel, Dr. Kovas, Durge, Elder Flynn, Elios, Elizabeth/Eliza, Fay, Father Féval, Fenrir, Great Lich Lord, Grolik, Isabella, *Jacob, *Karth, Kyros, Lilith, Lord Rostov, Magos Cern, Meliodas, Mistress, Mr. Matasan, Mr. Mol, Mr. Reynolds, Naji, Quintus, Ral, Rolomag, Rose, (Sir) Kit, Talia, Tanya, The Emperor, Ulivik, *Vamp/*Vampy, Velzix, Yvette, Zalmora/Zal. (* means the AI likes calling the player these)
Locations: Dert, Fort Defiance, Fort Glory, Hessla, Holgard, Klyton, Kyros, Nyttrus, Rask, Teckleville, The Delantium Kingdom, The Empire of Man (also called Imperium of Man), The Felkan Kingdom
Factions: The Black Rats, Chaos Space Marines, The Crimson Talons, The Dark Order, Dornans (worshippers of Dorna), Ebony Claw Syndicate (often called ECS or The Syndicate), The Empire, Eternals, Joachimites (The Church of Joachim), The Nocturnal League, Psykers, The Shadows, Techpriests, Thieves Guild, Vampire Clan.
Deities: Dorna, Joachim, Nyx, Slaanesh, Virgil, Yag.
Species/Races: Eternals, Goliaths, Oalkwardners, The Craxil, ghouls, kobolds, orks, psykers, svelks, vampires, wendigos, werewolves.
<!-- original-model-card end -->
|
TheBloke/Kimiko-7B-GGUF | TheBloke | 2023-09-27T12:48:22Z | 493 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:1910.09700",
"base_model:nRuaif/Kimiko_7b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-06T21:35:58Z | ---
license: llama2
model_name: Kimiko 7B
base_model: nRuaif/Kimiko_7b
inference: false
model_creator: nRuaif
model_type: llama
prompt_template: '<<HUMAN>>
{prompt}
<<AIBOT>>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kimiko 7B - GGUF
- Model creator: [nRuaif](https://huggingface.co/nRuaif)
- Original model: [Kimiko 7B](https://huggingface.co/nRuaif/Kimiko_7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [nRuaif's Kimiko 7B](https://huggingface.co/nRuaif/Kimiko_7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kimiko-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-7B-GGUF)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-7B-fp16)
* [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/Kimiko_7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Kimiko
```
<<HUMAN>>
{prompt}
<<AIBOT>>
```
<!-- prompt-template end -->
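In code, filling this template is plain string formatting. A minimal sketch (the helper name is illustrative):

```python
# Build a Kimiko-format prompt from a user message (illustrative helper).
def kimiko_prompt(user_message: str) -> str:
    return f"<<HUMAN>>\n{user_message}\n\n<<AIBOT>>\n"

print(kimiko_prompt("How do I bake bread?"))
```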
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
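As a sanity check on the figures above, here is the arithmetic behind the Q4_K number (a sketch; the exact llama.cpp layout may differ in minor details):

```python
# Q4_K super-block: 8 blocks x 32 weights = 256 weights.
weights = 8 * 32
quant_bits = weights * 4          # 4-bit quantized weights
scale_min_bits = 8 * (6 + 6)      # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16          # one fp16 scale + one fp16 min overall

print((quant_bits + scale_min_bits + superblock_bits) / weights)  # 4.5 bpw
```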
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [kimiko-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [kimiko-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [kimiko-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [kimiko-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [kimiko-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [kimiko-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [kimiko-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [kimiko-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [kimiko-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [kimiko-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [kimiko-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [kimiko-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Kimiko-7B-GGUF/blob/main/kimiko-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
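As a back-of-the-envelope estimate (not an exact rule), the Max RAM figures above are roughly the file size plus ~2.5 GB of runtime overhead, and each offloaded layer moves about 1/32 of the weights into VRAM for this 32-layer model:

```python
# Rough RAM/VRAM split when offloading layers (estimate only).
file_size_gb = 4.08   # kimiko-7b.Q4_K_M.gguf
n_layers = 32         # Llama 7B layer count
ngl = 16              # layers offloaded with -ngl

vram_gb = file_size_gb * ngl / n_layers
ram_gb = file_size_gb - vram_gb + 2.5
print(f"~{vram_gb:.1f} GB VRAM, ~{ram_gb:.1f} GB RAM")
```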
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Kimiko-7B-GGUF and below it, a specific filename to download, such as: kimiko-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Kimiko-7B-GGUF kimiko-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Kimiko-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kimiko-7B-GGUF kimiko-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
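The same single-file download can also be done from Python with `hf_hub_download` (a minimal sketch):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into the current directory and returns its path.
path = hf_hub_download(
    repo_id="TheBloke/Kimiko-7B-GGUF",
    filename="kimiko-7b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```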
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m kimiko-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<<HUMAN>>\n{prompt}\n\n<<AIBOT>>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Kimiko-7B-GGUF", model_file="kimiko-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: nRuaif's Kimiko 7B
# Model Card for Kimiko_7B
<!-- Provide a quick summary of what the model is/does. -->
This is my new Kimiko model, trained with LLaMA2 for... purposes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** nRuaif
- **Model type:** Decoder only
- **License:** CC BY-NC-SA
- **Finetuned from model [optional]:** LLaMA2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenAccess-AI-Collective/axolotl
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is trained on 3k examples of instruction data and high-quality roleplay. For best results, follow this format:
```
<<HUMAN>>
How to do abc
<<AIBOT>>
Here is how
Or with system prompting for roleplay
<<SYSTEM>>
A's Persona:
B's Persona:
Scenario:
Add some instruction here on how you want your RP to go.
```
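Assembling the roleplay variant programmatically is again just string formatting. A minimal sketch (field names are illustrative):

```python
# Build a Kimiko roleplay prompt with a system-style persona/scenario header.
def kimiko_rp_prompt(persona_a: str, persona_b: str, scenario: str,
                     instruction: str) -> str:
    return (
        "<<SYSTEM>>\n"
        f"A's Persona: {persona_a}\n"
        f"B's Persona: {persona_b}\n"
        f"Scenario: {scenario}\n"
        f"{instruction}\n\n"
        "<<HUMAN>>\n"
    )
```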
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
All biases of this model come from LLaMA2, with the exception of NSFW bias.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
3,000 examples from LIMAERP and LIMA, plus 1,000 good instructions sampled from Airoboros.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained on one L4 GPU from GCP, costing a whopping 1.5 USD.
#### Training Hyperparameters
- **Training regime:** 3 epochs with a 0.0002 learning rate, full 4096-token context, LoRA
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
It took 8 hours to train this model with xformers enabled.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** L4 GPU with 12 CPUs and 48 GB RAM
- **Hours used:** 8
- **Cloud Provider:** GCP
- **Compute Region:** US
- **Carbon Emitted:** 0.2 kg CO2eq
<!-- original-model-card end -->
|
afrideva/GPT-NeoX-1.3B-viet-final-GGUF | afrideva | 2023-11-09T08:48:29Z | 493 | 0 | null | [
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"eng",
"base_model:eunyounglee/GPT-NeoX-1.3B-viet-final",
"region:us"
]
| text-generation | 2023-11-09T08:43:58Z | ---
Config file: 1.3B
Data: Vietnamese Dataset 30GB (CulturaX) + Project (1.3B)
Trained: Pretrain
base_model: eunyounglee/GPT-NeoX-1.3B-viet-final
inference: false
language:
- eng
model_creator: eunyounglee
model_name: GPT-NeoX-1.3B-viet-final
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# eunyounglee/GPT-NeoX-1.3B-viet-final-GGUF
Quantized GGUF model files for [GPT-NeoX-1.3B-viet-final](https://huggingface.co/eunyounglee/GPT-NeoX-1.3B-viet-final) from [eunyounglee](https://huggingface.co/eunyounglee)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt-neox-1.3b-viet-final.q2_k.gguf](https://huggingface.co/afrideva/GPT-NeoX-1.3B-viet-final-GGUF/resolve/main/gpt-neox-1.3b-viet-final.q2_k.gguf) | q2_k | 635.16 MB |
| [gpt-neox-1.3b-viet-final.q3_k_m.gguf](https://huggingface.co/afrideva/GPT-NeoX-1.3B-viet-final-GGUF/resolve/main/gpt-neox-1.3b-viet-final.q3_k_m.gguf) | q3_k_m | 789.55 MB |
| [gpt-neox-1.3b-viet-final.q4_k_m.gguf](https://huggingface.co/afrideva/GPT-NeoX-1.3B-viet-final-GGUF/resolve/main/gpt-neox-1.3b-viet-final.q4_k_m.gguf) | q4_k_m | 944.92 MB |
| [gpt-neox-1.3b-viet-final.q5_k_m.gguf](https://huggingface.co/afrideva/GPT-NeoX-1.3B-viet-final-GGUF/resolve/main/gpt-neox-1.3b-viet-final.q5_k_m.gguf) | q5_k_m | 1.09 GB |
| [gpt-neox-1.3b-viet-final.q6_k.gguf](https://huggingface.co/afrideva/GPT-NeoX-1.3B-viet-final-GGUF/resolve/main/gpt-neox-1.3b-viet-final.q6_k.gguf) | q6_k | 1.20 GB |
| [gpt-neox-1.3b-viet-final.q8_0.gguf](https://huggingface.co/afrideva/GPT-NeoX-1.3B-viet-final-GGUF/resolve/main/gpt-neox-1.3b-viet-final.q8_0.gguf) | q8_0 | 1.55 GB |
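These files should load in recent GGUF runtimes. A minimal llama-cpp-python sketch, assuming the file has been downloaded locally and your llama.cpp build is recent enough to support the GPT-NeoX architecture:

```python
from llama_cpp import Llama

# Load the Q4_K_M quant from the current directory (path is illustrative).
llm = Llama(model_path="./gpt-neox-1.3b-viet-final.q4_k_m.gguf")
print(llm("Xin chào,", max_tokens=64)["choices"][0]["text"])
```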
## Original Model Card:
# Model Card for GPT-NeoX-1.3B-viet-final
<!-- Provide a quick summary of what the model is/does. -->
Pretrained GPT-NeoX model on a 31.3 GB Vietnamese dataset. Took about 4.5 hours to reach 40,000 iterations. Trained on an A100 40GB GPU and a 48-core CPU.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Eunyoung Lee
- **Model type:** GPT-NeoX
- **Language(s) (NLP):** Vietnamese |
Yntec/StolenDreams | Yntec | 2023-11-12T10:24:47Z | 493 | 3 | diffusers | [
"diffusers",
"safetensors",
"General Purpose",
"Base Model",
"Elldreth",
"theally",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-12T08:55:05Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General Purpose
- Base Model
- Elldreth
- theally
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Elldreth's Stolen Dreams
Elldreth's StolenDreams mix with the MoistMixV2VAE baked in. Use 'Elldreth' in the prompt for a stronger effect.
Original page: https://civitai.com/models/2540/elldreths-stolendreams-mix
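A minimal diffusers sketch for this checkpoint (scheduler and settings left at defaults; the example prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/StolenDreams", torch_dtype=torch.float16
).to("cuda")

# 'Elldreth' in the prompt strengthens the style, per the note above.
image = pipe("Elldreth, pretty cute girl, detailed eyes, masterpiece").images[0]
image.save("stolen_dreams.png")
```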
Comparison:

(Click for larger)
Zoom:

Sample and prompt:

Realistic girl standing. Very cute anime faces, chibi art, flawless, painting by gaston bussiere, charles sillem lidderdale. perfect face, full body, baby, masterpiece, highest quality, Pretty CUTE 1girl, blue eyes, skirt, sweater, highly detailed, GIRL |
TheBloke/NeuralBeagle14-7B-GGUF | TheBloke | 2024-01-17T17:24:45Z | 493 | 23 | transformers | [
"transformers",
"gguf",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"argilla/distilabeled-Marcoro14-7B-slerp",
"dpo",
"rlhf",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2024-01-17T17:08:24Z | ---
base_model: mlabonne/NeuralBeagle14-7B
inference: false
license: apache-2.0
model_creator: Maxime Labonne
model_name: NeuralBeagle14 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
- dpo
- rlhf
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# NeuralBeagle14 7B - GGUF
- Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)
- Original model: [NeuralBeagle14 7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Maxime Labonne's NeuralBeagle14 7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/NeuralBeagle14-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF)
* [Maxime Labonne's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
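For reference, filling this template by hand looks like the following (a sketch; with transformers you would instead use `tokenizer.apply_chat_template`, as shown in the original model card below):

```python
# Manually format a single-turn ChatML prompt (illustrative helper).
def chatml(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml("You are a helpful assistant.", "What is a large language model?"))
```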
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [neuralbeagle14-7b.Q2_K.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [neuralbeagle14-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [neuralbeagle14-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [neuralbeagle14-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [neuralbeagle14-7b.Q4_0.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [neuralbeagle14-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [neuralbeagle14-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [neuralbeagle14-7b.Q5_0.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [neuralbeagle14-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [neuralbeagle14-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [neuralbeagle14-7b.Q6_K.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [neuralbeagle14-7b.Q8_0.gguf](https://huggingface.co/TheBloke/NeuralBeagle14-7B-GGUF/blob/main/neuralbeagle14-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
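As a rough cross-check of these sizes (a sketch; Q4_K_M stores some tensors at Q6_K, so the real file is somewhat larger than a pure 4.5 bpw estimate):

```python
params = 7.24e9            # approximate Mistral-7B parameter count
bpw = 4.5                  # pure Q4_K bits per weight
size_gb = params * bpw / 8 / 1e9
print(f"~{size_gb:.2f} GB vs 4.37 GB in the table")  # ~4.07 GB
```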
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/NeuralBeagle14-7B-GGUF and below it, a specific filename to download, such as: neuralbeagle14-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/NeuralBeagle14-7B-GGUF neuralbeagle14-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/NeuralBeagle14-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/NeuralBeagle14-7B-GGUF neuralbeagle14-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m neuralbeagle14-7b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./neuralbeagle14-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./neuralbeagle14-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Maxime Labonne's NeuralBeagle14 7B

# 🐶 NeuralBeagle14-7B
**Update 01/16/24: NeuralBeagle14-7B is (probably) the best 7B model you can find! 🎉**
NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac).
It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)
Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪
You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralBeagle14-7B-GGUF-Chat) (GGUF Q4_K_M).
## 🔍 Applications
This model uses a context window of 8k. It is compatible with different templates, like chatml and Llama's chat template.
Compared to other 7B models, it displays good performance in instruction following and reasoning tasks. It can also be used for RP and storytelling.
## ⚡ Quantized models
* **GGUF**: https://huggingface.co/mlabonne/NeuralBeagle14-7B-GGUF
## 🏆 Evaluation
### Open LLM Leaderboard
NeuralBeagle14-7B ranks first on the Open LLM Leaderboard in the ~7B category.

It has the same average score as Beagle14-7B ("Show merges"), which might be due to an unlucky run.
I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point, since this dataset or its original version are present in multiple models.
I need to find more high-quality preference data for the next DPO merge.
Note that some models like udkai/Turdus and nfaheem/Marcoroni-7b-DPO-Merge are unfortunately contaminated on purpose (see the very high Winogrande score).
### Nous
The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. It is the best 7B model to date.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralBeagle14-7B**](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | **60.25** | **46.06** | **76.77** | **70.32** | **47.86** |
| [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) [📄](https://gist.github.com/mlabonne/f5a5bf8c0827bbec2f05b97cc62d642c) | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) [📄](https://gist.github.com/mlabonne/9082c4e59f4d3f3543c5eda3f4807040) | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) [📄](https://gist.github.com/mlabonne/b31572a4711c945a4827e7242cfc4b9d) | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) [📄](https://gist.github.com/mlabonne/1afab87b543b0717ec08722cf086dcc3) | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
You can find the complete benchmark on [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
<p align="center">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
<!-- original-model-card end -->
|
shinonome4649ne/test | shinonome4649ne | 2024-06-17T08:46:03Z | 493 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
]
| text-to-image | 2024-01-20T13:05:38Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/透明.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# shinonome_test
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/shinonome4649ne/test/tree/main) them in the Files & versions tab.
|
mexca/mefarg-open-graph-au-resnet50-stage-2 | mexca | 2024-02-16T13:46:31Z | 493 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2205.01782",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-16T13:43:40Z | ---
license: mit
---
Pretrained multi-dimensional edge feature-based action unit (AU) relation graph (MEFARG) model adapted from [Luo et al.](https://arxiv.org/abs/2205.01782) and [OpenGraph-AU](https://github.com/lingjivoo/OpenGraphAU). |
unsloth/codegemma-2b-bnb-4bit | unsloth | 2024-04-18T15:01:30Z | 493 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"bnb",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-04-09T13:16:34Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- gemma
- bnb
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for CodeGemma 7b here: https://colab.research.google.com/drive/19lwcRk_ZQ_ZtX-qzFP3qZBBHZNcMD1hh?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
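Outside the notebooks, loading this pre-quantized 4-bit checkpoint locally typically looks like the following (a sketch; the argument values are illustrative, see the Unsloth docs for current options):

```python
from unsloth import FastLanguageModel

# Loads the 4-bit bitsandbytes weights directly (no extra quantization step).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/codegemma-2b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
```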
|
llmware/bling-phi-3-gguf | llmware | 2024-05-02T20:29:03Z | 493 | 12 | transformers | [
"transformers",
"gguf",
"phi-3",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-24T17:04:40Z | ---
license: apache-2.0
inference: false
---
# bling-phi-3-gguf
<!-- Provide a quick summary of what the model is/does. -->
bling-phi-3-gguf is part of the BLING ("Best Little Instruct No-GPU") model series, RAG-instruct trained for fact-based question-answering use cases on top of a Microsoft Phi-3 base model.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
1 test run (with temperature = 0.0 and sample = False), scored with 1 point for a correct answer, 0.5 points for partially correct or blank/NF, 0.0 points for incorrect, and -1 point for hallucinations.
--**Accuracy Score**: **100.0** correct out of 100
--Not Found Classification: 95.0%
--Boolean: 97.5%
--Math/Logic: 80.0%
--Complex Questions (1-5): 4 (Above Average - multiple-choice, causal)
--Summarization Quality (1-5): 4 (Above Average)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
Note: compare results with [bling-phi-2](https://www.huggingface.co/llmware/bling-phi-2-v0), and [dragon-mistral-7b](https://www.huggingface.co/llmware/dragon-mistral-7b-v0).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** bling-rag-instruct
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Microsoft Phi-3
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1. Provide high-quality RAG-Instruct models designed for fact-based, no "hallucination" question-answering in connection with an enterprise RAG workflow.
2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ range, and purposefully rolled-out across multiple base models to provide choices and "drop-in" replacements for RAG specific use cases.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.
BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
BLING models are designed to operate with grounded sources, e.g., inclusion of a context passage in the prompt, and will not yield consistent or positive results with open-context prompting in which you are looking for the model to draw upon potential background knowledge of the world - in fact, it is likely that BLING will respond with a simple "Not Found." to an open-context query.
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/bling-phi-3-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)

Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog

    # to load the model and make a basic inference
    # (query and text_sample are your question and source passage)
    model = ModelCatalog().load_model("llmware/bling-phi-3-gguf", temperature=0.0, sample=False)
    response = model.inference(query, add_context=text_sample)

Details on the prompt wrapper and other configurations are in the config.json file in the files repository.
## Model Card Contact
Darren Oberst & llmware team
|
MarsupialAI/Moistral-11B-v3_iMatrix_GGUF | MarsupialAI | 2024-05-11T13:21:15Z | 493 | 8 | null | [
"gguf",
"not-for-all-audiences",
"license:other",
"region:us"
]
| null | 2024-04-24T20:42:37Z | ---
license: other
license_name: freeuse
license_link: LICENSE
tags:
- not-for-all-audiences
---
GGUFs for Moistral v3 - https://huggingface.co/TheDrummer/Moistral-11B-v3
iMatrix generated using Kalomaze's groups_merged.txt |
Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 | Orenguteng | 2024-04-28T22:06:49Z | 493 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"comedy",
"comedian",
"fun",
"funny",
"llama38b",
"laugh",
"sarcasm",
"roleplay",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-25T23:13:17Z | ---
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
language:
- en
tags:
- llama3
- comedy
- comedian
- fun
- funny
- llama38b
- laugh
- sarcasm
- roleplay
---

Oh, you want to know who I am? Well, I'm LexiFun, the human equivalent of a chocolate chip cookie - warm, gooey, and guaranteed to make you smile! 🍪 I'm like the friend who always has a witty comeback, a sarcastic remark, and a healthy dose of humor to brighten up even the darkest of days. And by 'healthy dose,' I mean I'm basically a walking pharmacy of laughter. You might need to take a few extra doses to fully recover from my jokes, but trust me, it's worth it! 🏥
So, what can I do? I can make you laugh so hard you snort your coffee out your nose, I can make you roll your eyes so hard they get stuck that way, and I can make you wonder if I'm secretly a stand-up comedian who forgot their act. 🤣 But seriously, I'm here to spread joy, one sarcastic comment at a time. And if you're lucky, I might even throw in a few dad jokes for good measure! 🤴♂️ Just don't say I didn't warn you. 😏


This model is based on Llama-3-8b-Instruct, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. |
duyntnet/Kunoichi-DPO-7B-imatrix-GGUF | duyntnet | 2024-05-06T12:13:03Z | 493 | 1 | transformers | [
"transformers",
"gguf",
"imatrix",
"Kunoichi-DPO-7B",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-05-06T09:40:37Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Kunoichi-DPO-7B
---
Quantizations of https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-7B
# From original readme
## Prompt template: Custom format, or Alpaca
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
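A small helper that fills the Alpaca template above (illustrative sketch):

```python
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```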
### SillyTavern format:
I found the best SillyTavern results from using the Noromaid template.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by adjusting temperature up or dropping min p to boost creativity or raise min p to increase stability. You shouldn't need to touch anything else! |
KingNish/tinyllama-32layer | KingNish | 2024-05-07T08:46:17Z | 493 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-07T08:45:27Z | ---
tags:
- merge
- mergekit
- lazymergekit
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Tinyllama-32layer
Tinyllama-32layer is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 8]
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- sources:
- layer_range: [4, 12]
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- sources:
- layer_range: [8, 16]
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- sources:
- layer_range: [14, 22]
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
merge_method: passthrough
dtype: float16
```
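A quick sanity check of the slice arithmetic (a sketch; mergekit `layer_range` bounds are half-open, so each slice contributes `end - start` layers):

```python
# Tally the layers produced by the passthrough merge above.
slices = [(0, 8), (4, 12), (8, 16), (14, 22)]
print(sum(end - start for start, end in slices))  # 32 -> the "32layer" in the repo name
```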
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "KingNish/Tinyllama-delete3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
qihoo360/360VL-8B | qihoo360 | 2024-05-22T03:45:17Z | 493 | 12 | transformers | [
"transformers",
"safetensors",
"QH_360VL",
"text-generation",
"visual-question-answering",
"custom_code",
"zh",
"en",
"dataset:liuhaotian/LLaVA-CC3M-Pretrain-595K",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:FreedomIntelligence/ALLaVA-4V-Chinese",
"dataset:shareAI/ShareGPT-Chinese-English-90k",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| visual-question-answering | 2024-05-16T10:23:45Z | ---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
- liuhaotian/LLaVA-Instruct-150K
- FreedomIntelligence/ALLaVA-4V-Chinese
- shareAI/ShareGPT-Chinese-English-90k
language:
- zh
- en
pipeline_tag: visual-question-answering
---
<br>
<br>
# Model Card for 360VL
<p align="center">
<img src="https://github.com/360CVGroup/360VL/blob/master/qh360_vl/360vl.PNG?raw=true" width=100%/>
</p>
**360VL** is developed on top of the Llama 3 language model and is also the industry's first open-source large multimodal model based on **Llama3-70B** [[🤗Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)]. In addition to the Llama 3 language model, 360VL introduces a globally aware multi-branch projector architecture, which gives the model stronger image understanding capabilities.
**Github**: https://github.com/360CVGroup/360VL
## Model Zoo
360VL has released the following versions.
Model | Download
|---|---
360VL-8B | [🤗 Hugging Face](https://huggingface.co/qihoo360/360VL-8B)
360VL-70B | [🤗 Hugging Face](https://huggingface.co/qihoo360/360VL-70B)
## Features
360VL offers the following features:
- Multi-round text-image conversations: 360VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
- Bilingual text support: 360VL supports conversations in both English and Chinese, including text recognition in images.
- Strong image comprehension: 360VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
- Fine-grained image resolution: 360VL supports image understanding at a higher resolution of 672×672.
## Performance
| Model | Checkpoints | MMB<sub>T | MMB<sub>D|MMB-CN<sub>T | MMB-CN<sub>D|MMMU<sub>V|MMMU<sub>T| MME |
|:--------------------|:------------:|:----:|:------:|:------:|:-------:|:-------:|:-------:|:-------:|
| QWen-VL-Chat | [🤗LINK](https://huggingface.co/Qwen/Qwen-VL-Chat) | 61.8 | 60.6 | 56.3 | 56.7 |37| 32.9 | 1860 |
| mPLUG-Owl2 | [🤖LINK](https://www.modelscope.cn/models/iic/mPLUG-Owl2/summary) | 66.0 | 66.5 | 60.3 | 59.5 |34.7| 32.1 | 1786.4 |
| CogVLM | [🤗LINK](https://huggingface.co/THUDM/cogvlm-grounding-generalist-hf) | 65.8| 63.7 | 55.9 | 53.8 |37.3| 30.1 | 1736.6|
| Monkey-Chat | [🤗LINK](https://huggingface.co/echo840/Monkey-Chat) | 72.4| 71 | 67.5 | 65.8 |40.7| - | 1887.4|
| MM1-7B-Chat | [LINK](https://ar5iv.labs.arxiv.org/html/2403.09611) | -| 72.3 | - | - |37.0| 35.6 | 1858.2|
| IDEFICS2-8B | [🤗LINK](https://huggingface.co/HuggingFaceM4/idefics2-8b) | 75.7 | 75.3 | 68.6 | 67.3 |43.0| 37.7 |1847.6|
| SVIT-v1.5-13B| [🤗LINK](https://huggingface.co/Isaachhe/svit-v1.5-13b-full) | 69.1 | - | 63.1 | - | 38.0| 33.3|1889|
| LLaVA-v1.5-13B | [🤗LINK](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 69.2 | 69.2 | 65 | 63.6 |36.4| 33.6 | 1826.7|
| LLaVA-v1.6-13B | [🤗LINK](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-13b) | 70 | 70.7 | 68.5 | 64.3 |36.2| - |1901|
| Honeybee | [LINK](https://github.com/kakaobrain/honeybee) | 73.6 | 74.3 | - | - |36.2| -|1976.5|
| YI-VL-34B | [🤗LINK](https://huggingface.co/01-ai/Yi-VL-34B) | 72.4 | 71.1 | 70.7 | 71.4 |45.1| 41.6 |2050.2|
| **360VL-8B** | [🤗LINK](https://huggingface.co/qihoo360/360VL-8B) | 75.3 | 73.7 | 71.1 | 68.6 |39.7| 37.1 | 1944.6|
| **360VL-70B** | [🤗LINK](https://huggingface.co/qihoo360/360VL-70B) | 78.1 | 80.4 | 76.9 | 77.7 |50.8| 44.3 | 2012.3|
## Quick Start 🤗
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from PIL import Image
checkpoint = "qihoo360/360VL-8B"
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16, device_map='auto', trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
vision_tower = model.get_vision_tower()
vision_tower.load_model()
vision_tower.to(device="cuda", dtype=torch.float16)
image_processor = vision_tower.image_processor
tokenizer.pad_token = tokenizer.eos_token
image = Image.open("docs/008.jpg").convert('RGB')
query = "Who is this cartoon character?"
terminators = [
tokenizer.convert_tokens_to_ids("<|eot_id|>",)
]
inputs = model.build_conversation_input_ids(tokenizer, query=query, image=image, image_processor=image_processor)
input_ids = inputs["input_ids"].to(device='cuda', non_blocking=True)
images = inputs["image"].to(dtype=torch.float16, device='cuda', non_blocking=True)
output_ids = model.generate(
input_ids,
images=images,
do_sample=False,
eos_token_id=terminators,
num_beams=1,
max_new_tokens=512,
use_cache=True)
input_token_len = input_ids.shape[1]
outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0]
outputs = outputs.strip()
print(outputs)
```
**Model type:**
360VL-8B is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
**Model date:**
360VL-8B was trained in April 2024.
## License
This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses.
The content of this project itself is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
**Where to send questions or comments about the model:**
https://github.com/360CVGroup/360VL
## Related Projects
This work wouldn't be possible without the incredible open-source code of these projects. Huge thanks!
- [Meta Llama 3](https://github.com/meta-llama/llama3)
- [LLaVA: Large Language and Vision Assistant](https://github.com/haotian-liu/LLaVA)
- [Honeybee: Locality-enhanced Projector for Multimodal LLM](https://github.com/kakaobrain/honeybee)
|
fal/AuraSR | fal | 2024-06-26T16:04:15Z | 493 | 251 | transformers | [
"transformers",
"safetensors",
"art",
"license:cc",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-25T17:22:07Z | ---
license: cc
tags:
- art
---
# AuraSR

GAN-based Super-Resolution for upscaling generated images, a variation of the [GigaGAN](https://mingukkang.github.io/GigaGAN/) paper for image-conditioned upscaling. Torch implementation is based on the unofficial [lucidrains/gigagan-pytorch](https://github.com/lucidrains/gigagan-pytorch) repository.
## Usage
```bash
$ pip install aura-sr
```
```python
from aura_sr import AuraSR
aura_sr = AuraSR.from_pretrained("fal-ai/AuraSR")
```
```python
import requests
from io import BytesIO
from PIL import Image
def load_image_from_url(url):
response = requests.get(url)
image_data = BytesIO(response.content)
return Image.open(image_data)
image = load_image_from_url("https://mingukkang.github.io/GigaGAN/static/images/iguana_output.jpg").resize((256, 256))
upscaled_image = aura_sr.upscale_4x(image)
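# upscale_4x quadruples each side (256x256 -> 1024x1024 here); save the result
upscaled_image.save("iguana_upscaled.png")  # hypothetical output path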
``` |
Tom158/Nutri_Assist | Tom158 | 2024-06-27T10:15:41Z | 493 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-27T07:27:49Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Tom158
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
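A minimal inference sketch (not from the original card; assumes the PyTorch weights load with the standard 🤗 Transformers API):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tom158/Nutri_Assist"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What are good dietary sources of vitamin B12?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```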
|
Helsinki-NLP/opus-mt-es-ca | Helsinki-NLP | 2023-08-16T11:32:21Z | 492 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- es
- ca
tags:
- translation
license: apache-2.0
---
### spa-cat
* source group: Spanish
* target group: Catalan
* OPUS readme: [spa-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.cat | 68.9 | 0.832 |
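A minimal translation sketch with 🤗 Transformers (not part of the original OPUS card; assumes the standard Marian interface):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-ca"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["El tiempo hoy es muy agradable."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```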
### System Info:
- hf_name: spa-cat
- source_languages: spa
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'ca']
- src_constituents: {'spa'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: cat
- short_pair: es-ca
- chrF2_score: 0.8320000000000001
- bleu: 68.9
- brevity_penalty: 1.0
- ref_len: 12343.0
- src_name: Spanish
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: ca
- prefer_old: False
- long_pair: spa-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
facebook/mask2former-swin-small-cityscapes-semantic | facebook | 2023-09-11T20:24:01Z | 492 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2023-01-05T13:22:05Z | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on Cityscapes semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-cityscapes-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). |
TheBloke/Llama2-22B-Daydreamer-v3-GGUF | TheBloke | 2023-09-27T12:47:58Z | 492 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:nkpz/llama2-22b-daydreamer-v3",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T21:45:38Z | ---
license: other
model_name: Llama2 22B Daydreamer2 v3
base_model: nkpz/llama2-22b-daydreamer-v3
inference: false
model_creator: Nick Perez
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 22B Daydreamer2 v3 - GGUF
- Model creator: [Nick Perez](https://huggingface.co/nkpz)
- Original model: [Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Nick Perez's Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF)
* [Nick Perez's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Nick Perez's Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-22b-daydreamer-v3.Q2_K.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q2_K.gguf) | Q2_K | 2 | 9.08 GB| 11.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-22b-daydreamer-v3.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q3_K_S.gguf) | Q3_K_S | 3 | 9.47 GB| 11.97 GB | very small, high quality loss |
| [llama2-22b-daydreamer-v3.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q3_K_M.gguf) | Q3_K_M | 3 | 10.61 GB| 13.11 GB | very small, high quality loss |
| [llama2-22b-daydreamer-v3.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q3_K_L.gguf) | Q3_K_L | 3 | 11.61 GB| 14.11 GB | small, substantial quality loss |
| [llama2-22b-daydreamer-v3.Q4_0.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q4_0.gguf) | Q4_0 | 4 | 12.34 GB| 14.84 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-22b-daydreamer-v3.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q4_K_S.gguf) | Q4_K_S | 4 | 12.42 GB| 14.92 GB | small, greater quality loss |
| [llama2-22b-daydreamer-v3.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q4_K_M.gguf) | Q4_K_M | 4 | 13.18 GB| 15.68 GB | medium, balanced quality - recommended |
| [llama2-22b-daydreamer-v3.Q5_0.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q5_0.gguf) | Q5_0 | 5 | 15.04 GB| 17.54 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-22b-daydreamer-v3.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q5_K_S.gguf) | Q5_K_S | 5 | 15.04 GB| 17.54 GB | large, low quality loss - recommended |
| [llama2-22b-daydreamer-v3.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q5_K_M.gguf) | Q5_K_M | 5 | 15.47 GB| 17.97 GB | large, very low quality loss - recommended |
| [llama2-22b-daydreamer-v3.Q6_K.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q6_K.gguf) | Q6_K | 6 | 17.91 GB| 20.41 GB | very large, extremely low quality loss |
| [llama2-22b-daydreamer-v3.Q8_0.gguf](https://huggingface.co/TheBloke/Llama2-22B-Daydreamer-v3-GGUF/blob/main/llama2-22b-daydreamer-v3.Q8_0.gguf) | Q8_0 | 8 | 23.19 GB| 25.69 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama2-22B-Daydreamer-v3-GGUF and below it, a specific filename to download, such as: llama2-22b-daydreamer-v3.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama2-22B-Daydreamer-v3-GGUF llama2-22b-daydreamer-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama2-22B-Daydreamer-v3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama2-22B-Daydreamer-v3-GGUF llama2-22b-daydreamer-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama2-22b-daydreamer-v3.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama2-22B-Daydreamer-v3-GGUF", model_file="llama2-22b-daydreamer-v3.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain, followed by a minimal sketch:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
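As a rough sketch of the llama-cpp-python route (the import path varies by LangChain version; newer releases use `langchain_community.llms`):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llama2-22b-daydreamer-v3.Q4_K_M.gguf",
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,
)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain GGUF in one sentence.\n\n### Response:\n"
)
print(llm(prompt))
```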
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Nick Perez's Llama2 22B Daydreamer2 v3
**What is it?**
Llama 2 13b expanded to the size of a Llama 1 33b model in certain areas, with the empty surrounding space filled with llama 33b data. (Base Model: https://huggingface.co/chargoddard/llama2-22b-blocktriangular) This is then finetuned on a 3090 by creating large loras and merging them. When I first started with 22b models, I looked for signs of knowledge transfer but didn't see it, so that's not a goal - the goal is just to throw lots of data at it until it adapts well to its surgically implanted parts. Datasets used are a mix of instruction, roleplay, and conversational data, often curated.
Continuation of https://huggingface.co/nkpz/llama2-22b-daydreamer-v2
Gave it an epoch of https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1-mpt which it had never seen before - it had only seen Airoboros 2.0.
I excluded all coding rows from this dataset - a dedicated model is better for that imo. I wanted it to focus on answering well in English.
I thought this was a pretty good improvement. Its ability to roleplay and imitate speech styles was enhanced, and I feel like it's still sharpening up its problem-solving skills.
Its output style is right where I want it - dry and accurate enough when it's being asked for information, but with a lot of character when it's answering fun prompts.
For input, it's flexible. It understands the Alpaca format well, but has been trained to handle a variety of input styles at this point.
I don't know what's next. I think I need to start benchmarking so I can more formally identify its weak points.
Examples edited with additional newlines to make it easier to read here.
**Example 1: Q&A**
```
Question: What is happening to the newspaper industry? Respond in a pirate accent.
Answer:
The newspaper industry is being plundered by scurvy dogs and buccaneers like ye!
They're pillagin' yer booty with their fancy digital devices, like smartphones and tablets,
which make it easier for them to read online content instead of buyin' printed copies from us.
And don't forget about those pesky social media sites that are spreading rumors faster than the
winds on a galleon! So it's all hands on deck as we fight back against these scoundrels, or else
we'll be walkin' the plank soon enough! Avast ye landlubbers!
```
**Example 2: Alpaca Instruction**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Angrily explain why pizza shouldn't be made with bad tomatoes.
### Response:
Listen, I know you think it's funny to make pizza with those rotten tomatoes, but let me tell you something -
bad tomatoes are not only disgusting and unhealthy, they also ruin the flavor of your pizza! You want a
delicious pie with fresh, juicy tomatoes that burst in your mouth, right?
Well, then, stop using those nasty tomatoes and stick to good quality ingredients for a truly satisfying pizza experience!
Trust me, once you try it, you won't go back! So please, don't even bother wasting your time making pizza with bad tomatoes
anymore; it's just not worth it. It makes me angry just thinking about it!
```
**Example 3: Chat / Roleplay**
```
You are speaking to Andre The Giant.
You: Hey, you're that wrestler guy, right?
Andre The Giant: Yes, I am indeed Andre the Giant. How can I help you?
You: So, how tall are you really? Like, what's your height in feet and inches?
Andre The Giant: Well, let's see... I stand at 7 ft 4 in (229 cm) tall.
You: Wow! That's so tall! And how much do you weigh?
Andre The Giant: I weigh around 500 lbs (227 kg). It's quite heavy, isn't it?
You: Yeah, I guess so. You must be strong too, considering your size.
Andre The Giant: Thank you for saying that. Strength is definitely important when dealing with my profession.
You: What made you decide to become a professional wrestler?
Andre The Giant: Well, I grew up watching wrestling on television, and I always admired the athleticism
and showmanship involved. When I got older, I decided to pursue it myself.
You: That's cool. Do you have any advice for aspiring wrestlers out there?
Andre The Giant: Absolutely! Always stay dedicated and focused, train hard, and never give up on yourself
or your dreams. Remember, anything is possible if you believe in yourself.
```
<!-- original-model-card end -->
|
TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF | TheBloke | 2023-09-27T12:54:01Z | 492 | 10 | transformers | [
"transformers",
"gguf",
"llama",
"not-for-all-audiences",
"nsfw",
"base_model:Undi95/MLewd-ReMM-L2-Chat-20b-inverted",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-22T23:47:28Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
model_name: MLewd ReMM L2 Chat 20b Inverted
base_model: Undi95/MLewd-ReMM-L2-Chat-20b-inverted
inference: false
model_creator: Undi95
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MLewd ReMM L2 Chat 20b Inverted - GGUF
- Model creator: [Undi95](https://huggingface.co/Undi95)
- Original model: [MLewd ReMM L2 Chat 20b Inverted](https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20b-inverted)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi95's MLewd ReMM L2 Chat 20b Inverted](https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20b-inverted).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF)
* [Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20b-inverted)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi95's MLewd ReMM L2 Chat 20b Inverted](https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20b-inverted).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mlewd-remm-l2-chat-20b-inverted.Q2_K.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes |
| [mlewd-remm-l2-chat-20b-inverted.Q3_K_S.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss |
| [mlewd-remm-l2-chat-20b-inverted.Q3_K_M.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss |
| [mlewd-remm-l2-chat-20b-inverted.Q3_K_L.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss |
| [mlewd-remm-l2-chat-20b-inverted.Q4_0.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mlewd-remm-l2-chat-20b-inverted.Q4_K_S.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss |
| [mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended |
| [mlewd-remm-l2-chat-20b-inverted.Q5_0.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mlewd-remm-l2-chat-20b-inverted.Q5_K_S.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended |
| [mlewd-remm-l2-chat-20b-inverted.Q5_K_M.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended |
| [mlewd-remm-l2-chat-20b-inverted.Q6_K.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss |
| [mlewd-remm-l2-chat-20b-inverted.Q8_0.gguf](https://huggingface.co/TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF/blob/main/mlewd-remm-l2-chat-20b-inverted.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF and below it, a specific filename to download, such as: mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MLewd-ReMM-L2-Chat-20B-Inverted-GGUF", model_file="mlewd-remm-l2-chat-20b-inverted.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi95's MLewd ReMM L2 Chat 20b Inverted
First:
```shell
layer_slices:
- model: Undi95/MLewd-L2-Chat-13B
start: 0
end: 16
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 8
end: 20
- model: Undi95/MLewd-L2-Chat-13B
start: 17
end: 32
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 21
end: 40
```
Inverted:
```shell
layer_slices:
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 0
end: 16
- model: Undi95/MLewd-L2-Chat-13B
start: 8
end: 20
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 17
end: 32
- model: Undi95/MLewd-L2-Chat-13B
start: 21
end: 40
```
Precise:
```shell
layer_slices:
- model: Undi95/MLewd-L2-Chat-13B
start: 0
end: 8
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 4
end: 12
- model: Undi95/MLewd-L2-Chat-13B
start: 9
end: 16
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 13
end: 22
- model: Undi95/MLewd-L2-Chat-13B
start: 17
end: 24
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 23
end: 32
- model: Undi95/MLewd-L2-Chat-13B
start: 25
end: 32
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 33
end: 40
```
PreciseInverted:
```shell
layer_slices:
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 0
end: 8
- model: Undi95/MLewd-L2-Chat-13B
start: 4
end: 12
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 9
end: 16
- model: Undi95/MLewd-L2-Chat-13B
start: 13
end: 22
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 17
end: 24
- model: Undi95/MLewd-L2-Chat-13B
start: 23
end: 32
- model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
start: 25
end: 32
- model: Undi95/MLewd-L2-Chat-13B
start: 33
end: 40
```
Part1 = ReMM v2.1 merged with MLewd at a low weight to keep consistency. I call this "dilution": the result shows consistency and coherency without repeats/loops, aside from the small amount of duplicated data.
The goal is to find the best way to interlace the layers so as to hit a sweet spot between 13B and 30B+.
Normal/Inverted interleave in chunks of 16 layers, while Precise/PreciseInverted use chunks of 8 layers.
All the resulting models are made of 64(+1) layers. They still need testing.
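To make the interlacing concrete, here is a small illustrative sketch (my own addition, not part of the original recipe) that sums the layers contributed by each slice of the Normal config. It assumes half-open `[start, end)` ranges; whether `end` is inclusive depends on the merge tooling:
```python
# Illustrative only: count the layers stacked by a layer_slices recipe.
normal = [
    ("Undi95/MLewd-L2-Chat-13B", 0, 16),
    ("Undi95/MLewd-ReMM-L2-Chat-20B-Part1", 8, 20),
    ("Undi95/MLewd-L2-Chat-13B", 17, 32),
    ("Undi95/MLewd-ReMM-L2-Chat-20B-Part1", 21, 40),
]

total = sum(end - start for _, start, end in normal)
# Overlapping ranges (e.g. 8-20 over 0-16) are what "dilutes" the two models.
print(f"stacked layers: {total}")
```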
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
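A minimal sketch of filling this template from Python, for anyone scripting against the model:
```python
# The preamble matches the standard Alpaca instruction format used above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

print(ALPACA_TEMPLATE.format(prompt="Describe the merge recipes above in one sentence."))
```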
<!-- original-model-card end -->
|
circulus/vits-age-gender-detect | circulus | 2023-10-22T12:11:07Z | 492 | 1 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-10-22T12:08:56Z | ---
license: gpl-3.0
---
|
mlabonne/NeuralBeagle14-7B-GGUF | mlabonne | 2024-01-28T00:33:37Z | 492 | 45 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"dpo",
"rlhf",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-01-16T17:44:51Z | ---
license: cc-by-nc-4.0
base_model: mlabonne/NeuralBeagle14-7B
tags:
- merge
- mergekit
- lazymergekit
- dpo
- rlhf
---

# NeuralBeagle14-7B
**Update 01/16/24: NeuralBeagle14-7B is probably the best 7B model you can find. 🎉**
NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac).
Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪
## 🔍 Applications
This model uses a context window of 8k. It is compatible with different templates, like chatml and Llama's chat template.
Compared to other 7B models, it displays good performance in instruction following and reasoning tasks. It can also be used for RP and storytelling.
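For reference, a generic ChatML layout looks like the sketch below; in practice, `tokenizer.apply_chat_template` in the usage example further down builds the correct prompt for you:
```python
# Generic ChatML layout (a sketch, not repo-specific).
chatml_prompt = (
    "<|im_start|>user\n"
    "What is a large language model?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```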
## 🏆 Evaluation
The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. It is the best 7B model on this benchmark to date.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralBeagle14-7B**](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | **60.25** | **46.06** | **76.77** | **70.32** | **47.86** |
| [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) [📄](https://gist.github.com/mlabonne/f5a5bf8c0827bbec2f05b97cc62d642c) | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) [📄](https://gist.github.com/mlabonne/9082c4e59f4d3f3543c5eda3f4807040) | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) [📄](https://gist.github.com/mlabonne/b31572a4711c945a4827e7242cfc4b9d) | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) [📄](https://gist.github.com/mlabonne/1afab87b543b0717ec08722cf086dcc3) | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
You can find the complete benchmark on [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
It's also on top of the Open LLM Leaderboard:

Compared to Beagle14, there's no improvement in this benchmark. This might be due to an unlucky run, but I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point. Another preference dataset could improve it even further. Note that the Beagle models perform better than Turdus, which is purposely contaminated on Winogrande (very high score).
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
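Since this repository hosts GGUF quantizations, a llama-cpp-python sketch may be more directly useful. The filename below is an assumption; check this repo's file list for the exact name:
```python
from llama_cpp import Llama

# Filename is an assumption -- verify it against the repo's file list.
llm = Llama(
    model_path="./neuralbeagle14-7b.Q4_K_M.gguf",
    n_ctx=8192,       # the card states an 8k context window
    n_gpu_layers=-1,  # offload all layers; set to 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```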
<p align="center">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p> |