| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf | RichardErkhov | 2024-06-01T05:03:52Z | 441 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-31T23:52:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-SauerkrautLM-70b-Instruct - GGUF
- Model creator: https://huggingface.co/VAGOsolutions/
- Original model: https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-SauerkrautLM-70b-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q2_K.gguf) | Q2_K | 24.56GB |
| [Llama-3-SauerkrautLM-70b-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Llama-3-SauerkrautLM-70b-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.IQ3_S.gguf) | IQ3_S | 2.44GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.24GB |
| [Llama-3-SauerkrautLM-70b-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.IQ3_M.gguf) | IQ3_M | 23.07GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q3_K.gguf) | Q3_K | 10.47GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q3_K_M.gguf) | Q3_K_M | 5.53GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q3_K_L.gguf) | Q3_K_L | 2.93GB |
| [Llama-3-SauerkrautLM-70b-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.68GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q4_0.gguf) | Q4_0 | 0.46GB |
| [Llama-3-SauerkrautLM-70b-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.01GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q4_K.gguf) | Q4_K | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q4_1.gguf) | Q4_1 | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q5_0.gguf) | Q5_0 | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q5_K.gguf) | Q5_K | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q5_1.gguf) | Q5_1 | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q6_K.gguf) | Q6_K | 0.0GB |
| [Llama-3-SauerkrautLM-70b-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf/blob/main/Llama-3-SauerkrautLM-70b-Instruct.Q8_0.gguf) | Q8_0 | 0.0GB |
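For completeness, here is a minimal sketch of fetching one of the files listed above and running it locally; `huggingface_hub` and `llama-cpp-python` are illustrative tool choices, not something this card prescribes.
```python
# Sketch: download one of the quantized files listed above and run it locally.
# huggingface_hub and llama-cpp-python are illustrative tool choices; the card
# itself does not prescribe any runtime.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/VAGOsolutions_-_Llama-3-SauerkrautLM-70b-Instruct-gguf",
    filename="Llama-3-SauerkrautLM-70b-Instruct.Q2_K.gguf",  # any file name from the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length chosen arbitrarily
result = llm("Write one sentence about sauerkraut.", max_tokens=64)
print(result["choices"][0]["text"])
```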
Original model description:
---
language:
- de
- en
tags:
- dpo
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---

## VAGO solutions Llama-3-SauerkrautLM-70b-Instruct
Introducing **Llama-3-SauerkrautLM-70b-Instruct** – our Sauerkraut version of the powerful [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)!
The model **Llama-3-SauerkrautLM-70b-Instruct** is a **joint effort** between **VAGO Solutions** and **Hyperspace.ai**.
- Aligned with **DPO**
# Table of Contents
1. [Overview of all Llama-3-SauerkrautLM-70b-Instruct](#all-Llama-3-SauerkrautLM-70b-Instruct)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-llama-3-70b-Instruct
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama-3-SauerkrautLM-70b-Instruct | [Link](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-70b-Instruct-exl2) | [Link](https://huggingface.co/redponike/Llama-3-SauerkrautLM-70b-Instruct-GGUF) | [Link](https://huggingface.co/cortecs/Llama-3-SauerkrautLM-70b-Instruct-GPTQ) |
## Model Details
**SauerkrautLM-llama-3-70B-Instruct**
- **Model Type:** Llama-3-SauerkrautLM-70b-Instruct is a finetuned Model based on [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Language(s):** German, English
- **License:** [meta-llama](https://llama.meta.com/llama3/license)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model with DPO fine-tuning for 1 epoch on 70k samples.
**We noticeably improved the model's capabilities by feeding it curated German data.**
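The training script is not included in this card; the snippet below is a hypothetical sketch of a DPO run with TRL's `DPOTrainer`. The dataset path, hyperparameters, and output directory are placeholders, and the exact trainer signature varies across TRL versions.
```python
# Hypothetical sketch of a DPO fine-tuning run (not the authors' actual script).
# Assumes a TRL version where DPOConfig carries beta, and a preference dataset
# with "prompt", "chosen" and "rejected" columns; the file name is a placeholder.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

dataset = load_dataset("json", data_files="curated_german_preferences.jsonl", split="train")

args = DPOConfig(output_dir="sauerkraut-dpo", num_train_epochs=1, beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```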
### Prompt Template:
**English:**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
**German:**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Du bist ein freundlicher und hilfreicher deutscher KI-Assistent.<|eot_id|><|start_header_id|>user<|end_header_id|>
Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
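Both templates follow the standard Llama 3 chat format, so they can also be produced through the tokenizer's chat template; a short sketch, assuming the repository ships the usual Llama 3 template:
```python
# Sketch: build the prompt shown above via the tokenizer's chat template.
# Assumes the repository ships the standard Llama 3 chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct")

messages = [
    {"role": "system", "content": "Du bist ein freundlicher und hilfreicher deutscher KI-Assistent."},
    {"role": "user", "content": "Was ist Sauerkraut?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the German template above
```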
## Evaluation
**Open LLM Leaderboard:**
evaluated with lm-evaluation-harness 0.4.2
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **80.98** |
| ARC (25-shot) | 74.31 |
| HellaSwag (10-shot) | 87.56 |
| MMLU (5-shot) | 81.09 |
| TruthfulQA (0-shot) | 67.01 |
| Winogrande (5-shot) | 84.69 |
| GSM8K (5-shot) | 91.20 |
**MT-Bench English**
```
########## First turn ##########
score
model turn
Llama-3-SauerkrautLM-70b-Instruct 1 8.86875
########## Second turn ##########
score
model turn
Llama-3-SauerkrautLM-70b-Instruct 2 8.506329
########## Average ##########
score
model
Llama-3-SauerkrautLM-70b-Instruct 8.688679
```
**MT-Bench German**
```
########## First turn ##########
score
model turn
Llama-3-SauerkrautLM-70b-Instruct 1 8.725
########## Second turn ##########
score
model turn
Llama-3-SauerkrautLM-70b-Instruct 2 8.5
########## Average ##########
score
model
Llama-3-SauerkrautLM-70b-Instruct 8.6125
```
**German RAG LLM Evaluation**
Corrected results after the fix in https://github.com/huggingface/lighteval/pull/171:
```
| Task |Version|Metric|Value| |Stderr|
|------------------------------------------------------|------:|------|----:|---|-----:|
|all | |acc |0.980|± |0.0034|
|community:german_rag_eval:_average:0 | |acc |0.980|± |0.0034|
|community:german_rag_eval:choose_context_by_question:0| 0|acc |0.998|± |0.0014|
|community:german_rag_eval:choose_question_by_context:0| 0|acc |1.000|± |0.0000|
|community:german_rag_eval:context_question_match:0 | 0|acc |0.973|± |0.0051|
|community:german_rag_eval:question_answer_match:0 | 0|acc |0.949|± |0.0070|
```
## Disclaimer
Despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, please understand that the licensing of these models does not constitute legal advice, and we cannot be held responsible for the actions of third parties who use our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt) or [Hyperspace.computer](https://hyperspace.computer/).
## Acknowledgement
Many thanks to [Meta](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for providing such a valuable model to the open-source community.
Many thanks to [redponike](https://huggingface.co/redponike) and [cortecs](https://huggingface.co/cortecs) for the quantized versions.
|
PranavsAppu/pet-dogs | PranavsAppu | 2024-06-03T12:35:58Z | 441 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-03T12:29:34Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Pet-Dogs Dreambooth model trained by PranavsAppu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 23032402
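The card ships no usage snippet; below is a minimal `diffusers` sketch. The instance prompt token is an assumption, since the card does not state which identifier the DreamBooth concept was trained on.
```python
# Sketch: load this DreamBooth checkpoint with diffusers and generate an image.
# The prompt token ("pet-dogs") is an assumption; the card does not state it.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("PranavsAppu/pet-dogs", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of pet-dogs playing in a garden", num_inference_steps=30).images[0]
image.save("pet_dogs_sample.png")
```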
Sample pictures of this concept:
|
Felladrin/gguf-MicroLlama | Felladrin | 2024-06-07T07:23:55Z | 441 | 0 | null | [
"gguf",
"base_model:keeeeenw/MicroLlama",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-07T07:20:39Z | ---
license: apache-2.0
base_model: keeeeenw/MicroLlama
---
GGUF version of [keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama).
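The card does not list the file names inside the repo; one hedged way to discover and fetch them is via `huggingface_hub` (an assumed tool choice, not stated in the card):
```python
# Sketch: list the GGUF files in this repo and download one of them.
# huggingface_hub is an assumption here; the card itself names no tooling.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Felladrin/gguf-MicroLlama"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Downloaded to", local_path)
```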
|
sbintuitions/sarashina2-13b | sbintuitions | 2024-06-28T01:22:16Z | 441 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-07T11:58:51Z | ---
license: mit
language:
- ja
- en
---
# Sarashina2-13B
This repository provides large language models trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
model = AutoModelForCausalLM.from_pretrained("sbintuitions/sarashina2-13b", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-13b")
# If you want to use slow tokenizer
# tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-13b", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)
text = generator(
"おはようございます、今日の天気は",
max_length=30,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
num_return_sequences=3,
)
for t in text:
print(t)
```
## Configuration
| Parameters | Vocab size | Training tokens | Architecture | Position type | Layers | Hidden dim | Attention heads |
| :-----: | :-----------: | :-------------: | :------------ | :-----------: | :----: | :--------: | :-------------: |
| [7B](https://huggingface.co/sbintuitions/sarashina2-7b) | 102400 | 2.1T | Llama2 | RoPE | 32 | 4096 | 32 |
| [13B](https://huggingface.co/sbintuitions/sarashina2-13b) | 102400 | 2.1T | Llama2 | RoPE | 40 | 5120 | 40 |
| 70B (TBA) | | | | | | | |
## Training Corpus
For our Japanese training data, we used a Japanese portion of the [Common Crawl corpus](https://commoncrawl.org/), which is the largest Web corpus, as our training dataset.
To clean the training corpus, we used [CCNet](https://github.com/facebookresearch/cc_net) and [HojiChar](https://github.com/HojiChar/HojiChar).
After cleaning, our Japanese training data contains about 1T tokens.
For our English training data, we extracted English documents from [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B), but we removed the books3 corpus due to copyright infringement concerns.
## Tokenization
We use a [sentencepiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte-fallback.
We do not apply pre-tokenization with a Japanese tokenizer.
Thus, a user may directly feed raw sentences into the tokenizer.
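As a small illustration, raw Japanese text can be passed straight to the tokenizer loaded as in the usage example above:
```python
# Sketch: tokenize a raw Japanese sentence directly, without pre-tokenization.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-13b")
ids = tokenizer("おはようございます、今日の天気は")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))  # unigram subwords, with byte-fallback if needed
```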
## Ethical Considerations and Limitations
Sarashina2 has not been tuned to follow instructions yet.
Therefore, Sarashina2 might generate meaningless sequences, inaccurate statements, or biased/objectionable outputs.
Before using Sarashina2, we would like developers to tune the models based on human preferences and safety considerations.
## License
[MIT License](https://huggingface.co/sbintuitions/sarashina2-7b/blob/main/LICENSE) |
mradermacher/Yousei-22B-i1-GGUF | mradermacher | 2024-06-15T08:43:03Z | 441 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Envoid/Yousei-22B",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-14T11:28:14Z | ---
base_model: Envoid/Yousei-22B
language:
- en
library_name: transformers
no_imatrix: 'GGML_ASSERT: ggml-quants.c:13358: besti1 >= 0 && besti2 >= 0 && best_shift
!= 0'
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Envoid/Yousei-22B
**No more quants will be added, as llama.cpp crashes when trying to do so.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yousei-22B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q2_K.gguf) | i1-Q2_K | 8.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q4_0.gguf) | i1-Q4_0 | 12.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yousei-22B-i1-GGUF/resolve/main/Yousei-22B.i1-Q6_K.gguf) | i1-Q6_K | 18.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
dyamagishi/human_place | dyamagishi | 2024-06-21T22:45:10Z | 441 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:cagliostrolab/animagine-xl-3.1",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-06-21T04:15:31Z | ---
base_model: cagliostrolab/animagine-xl-3.1
library_name: diffusers
license: openrail++
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-dyamagishi/human_place
These are ControlNet weights trained on cagliostrolab/animagine-xl-3.1 with a new type of conditioning.
You can find some example images below.
prompt: outdoors, scenery, cloud, multiple_girls, sky, day, tree, grass, architecture, 2girls, blue_sky, building, standing, skirt, long_hair, mountain, east_asian_architecture, from_behind, castle, facing_away, black_skirt, school_uniform, pagoda, waterfall, white_shirt, white_hair, shirt, cloudy_sky, bag

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
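Until the snippet above is filled in, here is a hedged sketch of loading these weights with `diffusers`; the conditioning image is a placeholder because the card does not describe the new conditioning type.
```python
# Sketch: load these ControlNet weights on top of the animagine-xl-3.1 base.
# The conditioning image ("condition.png") is a placeholder; the card does not
# describe what the new conditioning type looks like.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained("dyamagishi/human_place", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

condition = Image.open("condition.png").convert("RGB")
image = pipe("outdoors, scenery, 2girls, blue_sky, building", image=condition).images[0]
image.save("output.png")
```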
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
NikolayKozloff/tabula-8b-Q5_0-GGUF | NikolayKozloff | 2024-06-23T15:49:25Z | 441 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:jpgard/t4-full",
"base_model:mlfoundations/tabula-8b",
"license:llama3",
"region:us"
]
| null | 2024-06-23T15:49:01Z | ---
base_model: mlfoundations/tabula-8b
datasets:
- jpgard/t4-full
language:
- en
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/tabula-8b-Q5_0-GGUF
This model was converted to GGUF format from [`mlfoundations/tabula-8b`](https://huggingface.co/mlfoundations/tabula-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mlfoundations/tabula-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/tabula-8b-Q5_0-GGUF --hf-file tabula-8b-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/tabula-8b-Q5_0-GGUF --hf-file tabula-8b-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/tabula-8b-Q5_0-GGUF --hf-file tabula-8b-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/tabula-8b-Q5_0-GGUF --hf-file tabula-8b-q5_0.gguf -c 2048
```
|
snower/omost-phi-3-mini-128k-Q4_K_M-GGUF | snower | 2024-06-24T06:22:38Z | 441 | 0 | null | [
"gguf",
"pytorch",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:lllyasviel/omost-phi-3-mini-128k",
"region:us"
]
| null | 2024-06-24T06:22:22Z | ---
base_model: lllyasviel/omost-phi-3-mini-128k
tags:
- pytorch
- trl
- sft
- llama-cpp
- gguf-my-repo
inference: false
---
# snower/omost-phi-3-mini-128k-Q4_K_M-GGUF
This model was converted to GGUF format from [`lllyasviel/omost-phi-3-mini-128k`](https://huggingface.co/lllyasviel/omost-phi-3-mini-128k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lllyasviel/omost-phi-3-mini-128k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo snower/omost-phi-3-mini-128k-Q4_K_M-GGUF --hf-file omost-phi-3-mini-128k-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo snower/omost-phi-3-mini-128k-Q4_K_M-GGUF --hf-file omost-phi-3-mini-128k-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo snower/omost-phi-3-mini-128k-Q4_K_M-GGUF --hf-file omost-phi-3-mini-128k-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo snower/omost-phi-3-mini-128k-Q4_K_M-GGUF --hf-file omost-phi-3-mini-128k-q4_k_m.gguf -c 2048
```
|
kmpartner/bkcncl-ft-test2 | kmpartner | 2024-06-29T00:54:34Z | 441 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
]
| null | 2024-06-27T06:36:03Z | Entry not found |
RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf | RichardErkhov | 2024-06-29T13:16:12Z | 441 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-29T13:12:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Pythia-70m-Synonym-Sentence-Converter - GGUF
- Model creator: https://huggingface.co/Mxode/
- Original model: https://huggingface.co/Mxode/Pythia-70m-Synonym-Sentence-Converter/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Pythia-70m-Synonym-Sentence-Converter.Q2_K.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q2_K.gguf) | Q2_K | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.IQ3_XS.gguf) | IQ3_XS | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.IQ3_S.gguf) | IQ3_S | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q3_K_S.gguf) | Q3_K_S | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.IQ3_M.gguf) | IQ3_M | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q3_K.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q3_K.gguf) | Q3_K | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q3_K_M.gguf) | Q3_K_M | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q3_K_L.gguf) | Q3_K_L | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.IQ4_XS.gguf) | IQ4_XS | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q4_0.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q4_0.gguf) | Q4_0 | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.IQ4_NL.gguf) | IQ4_NL | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q4_K_S.gguf) | Q4_K_S | 0.04GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q4_K.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q4_K.gguf) | Q4_K | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q4_K_M.gguf) | Q4_K_M | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q4_1.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q4_1.gguf) | Q4_1 | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q5_0.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q5_0.gguf) | Q5_0 | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q5_K.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q5_K.gguf) | Q5_K | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q5_1.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q5_1.gguf) | Q5_1 | 0.05GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q6_K.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q6_K.gguf) | Q6_K | 0.06GB |
| [Pythia-70m-Synonym-Sentence-Converter.Q8_0.gguf](https://huggingface.co/RichardErkhov/Mxode_-_Pythia-70m-Synonym-Sentence-Converter-gguf/blob/main/Pythia-70m-Synonym-Sentence-Converter.Q8_0.gguf) | Q8_0 | 0.07GB |
Original model description:
---
license: apache-2.0
language:
- en
tags:
- tiny
- small
- synonym
- tool
- converter
---
## What's this?
A **tiny** model that can perform **paraphrasing** or **synonym substitution**.
The base model is [pythia-70m](https://huggingface.co/EleutherAI/pythia-70m). This model was fine-tuned for 10 epochs using the [QLoRA](https://github.com/artidoro/qlora) method on my own training set.
## How to use
### quick start
First import the model from hf:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model_name_or_path = 'Mxode/Pythia-70m-Synonym-Sentence-Converter'
device = 'cuda'
model = GPTNeoXForCausalLM.from_pretrained(model_name_or_path).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# prompt template
prompt = '<|prompt|>Convert the following passage into synonymous sentences.<|prompt|>\n'
# any text you wish to convert, preferably in complete single sentences.
content = 'The theories and methods of systems science are extensively employed in various domains, including biology, economics, and sociology.'
text = prompt + content
```
Then generate:
```python
inputs = tokenizer(text, return_tensors="pt").to(device)
input_ids = inputs.input_ids
tokens = model.generate(
**inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=100,
do_sample=True,
)
# strip the input
response = tokenizer.decode(tokens[0]).replace(text, "").strip('<|endoftext|>')
# I call it 'Synonymizer' :)
print(f'Synonymizer: {response}')
### output:
### The disciplines of systems science are extensively employed in various domains, including biology, economics, and sociology.
```
Or maybe we'll try some news it could not possibly have been trained on? Hmm, grab some sports news from ESPN and try:
```python
### ...
content = 'As both teams exited the court for halftime, Baynes and Mayen were shoulder to shoulder.'
### ...
print(f'Synonymizer: {response}')
### output:
### As the team neets around the court to ease their shifts, Baynes and Middets were partnerly paryyneen.
### sometimes:
### Begantly mastitatively, Baynes and Mayen staged their team rested the Tywindes rested the Tywindes rested the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid
```
WELL, as you can see, this is after all only an **experimental tiny model** and with that in mind I can give it a 7.5 out of 10 for performance.
I didn't tune the generation hyperparameters; you could try a lower temperature plus a slightly higher repetition_penalty, and the performance might be better.
I'll follow up by training a slightly larger model on more data, hopefully with support for multiple languages. We all know that bigger models have better generalization capabilities - but smaller models are really cool :)
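Continuing the generation snippet above, those two knobs plug directly into `generate`; the concrete values below are illustrative, not tuned.
```python
# Continues from the snippet above (model, tokenizer, inputs and text already defined).
# Lower temperature and a slightly higher repetition penalty; values are illustrative.
tokens = model.generate(
    **inputs,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
    repetition_penalty=1.2,
)
response = tokenizer.decode(tokens[0]).replace(text, "").strip('<|endoftext|>')
print(f'Synonymizer: {response}')
```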
|
liam168/trans-opus-mt-en-zh | liam168 | 2021-07-16T04:17:11Z | 440 | 26 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:05Z | ---
language:
- en
- zh
tags:
- translation
widget:
- text: "I like to study Data Science and Machine Learning."
---
# liam168/trans-opus-mt-en-zh
## Model description
* source group: English
* target group: Chinese
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
## How to use
```python
>>> from transformers import AutoModelWithLMHead,AutoTokenizer,pipeline
>>> mode_name = 'liam168/trans-opus-mt-en-zh'
>>> model = AutoModelWithLMHead.from_pretrained(mode_name)
>>> tokenizer = AutoTokenizer.from_pretrained(mode_name)
>>> translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer)
>>> translation('I like to study Data Science and Machine Learning.', max_length=400)
[{'translation_text': '我喜欢学习数据科学和机器学习'}]
```
## Contact
[email protected]
|
nielsr/vit-base-patch16-224-in21k-finetuned-cifar10 | nielsr | 2023-09-12T06:22:07Z | 440 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- type: accuracy
value: 0.9881481481481481
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.9881
## Model description
More information needed
## Intended uses & limitations
More information needed
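Pending those details, a minimal, illustrative inference sketch using the `pipeline` API (the test image URL is just a convenient public sample):
```python
# Illustrative sketch: classify an image with this fine-tuned ViT.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="nielsr/vit-base-patch16-224-in21k-finetuned-cifar10",
)
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(preds)  # top CIFAR-10 labels with scores
```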
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2455 | 1.0 | 190 | 0.2227 | 0.9830 |
| 0.1363 | 2.0 | 380 | 0.1357 | 0.9881 |
| 0.0954 | 3.0 | 570 | 0.1194 | 0.9878 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ibm/smxm | ibm | 2022-10-12T14:21:51Z | 440 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-12T14:21:41Z | Entry not found |
google/mobilenet_v2_0.75_160 | google | 2023-05-16T16:38:37Z | 440 | 2 | transformers | [
"transformers",
"pytorch",
"mobilenet_v2",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1801.04381",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-11-10T16:05:04Z | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V2
MobileNet V2 model pre-trained on ImageNet-1k at resolution 160x160. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_0.75\_160**, where **0.75** is the depth multiplier and **160** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_0.75_160")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_0.75_160")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{mobilenetv22018,
title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
booktitle={CVPR},
year={2018}
}
```
|
timm/deit_base_patch16_384.fb_in1k | timm | 2024-02-10T23:37:21Z | 440 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-28T01:31:58Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for deit_base_patch16_384.fb_in1k
A DeiT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.9
- GMACs: 55.5
- Activations (M): 101.6
- Image size: 384 x 384
- **Papers:**
- Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('deit_base_patch16_384.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'deit_base_patch16_384.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{pmlr-v139-touvron21a,
title = {Training data-efficient image transformers & distillation through attention},
author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
booktitle = {International Conference on Machine Learning},
pages = {10347--10357},
year = {2021},
volume = {139},
month = {July}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
porkorbeef/Llama-2-13b-15_170806-7 | porkorbeef | 2023-08-29T06:43:52Z | 440 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| feature-extraction | 2023-08-29T00:41:46Z | Entry not found |
mlinmg/SG-Raccoon-Yi-55B | mlinmg | 2023-11-29T10:52:43Z | 440 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-27T19:45:11Z | ---
language:
- en
pipeline_tag: conversational
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/>
</p>
---
# SG Raccoon Yi 55B
The first 55B auto-regressive causal LM created by combining 2x finetuned [Yi 34b](https://huggingface.co/01-ai/Yi-34B) into one.
# Prompting Format
```
single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>
multi-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>target2<|endoftext|>
```
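As a rough illustration (a sketch, not taken from the original card) of how the single-turn template above can be assembled before tokenization:
```python
# Sketch: building the single-turn prompt string shown above. The special tokens
# are written out literally; whether your tokenizer inserts <|startoftext|> on its
# own depends on its configuration, so check before doubling it up.
def build_prompt(user_message: str) -> str:
    return f"<|startoftext|>Human: {user_message}\n\nAssistant: "

print(build_prompt("Hello!"))
```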
# Merge process
The models used in the merge are [dolphin-2_2-yi-34b](https://huggingface.co/ehartford/dolphin-2_2-yi-34b) and [OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama).
The layer ranges used are as follows:
```yaml
- range 0, 16
OrionStar-Yi-34B-Chat
- range 8, 24
dolphin-2_2-yi-34b
- range 17, 32
OrionStar-Yi-34B-Chat
- range 25, 40
dolphin-2_2-yi-34b
- range 33, 48
OrionStar-Yi-34B-Chat
- range 41, 56
dolphin-2_2-yi-34b
- range 49, 64
OrionStar-Yi-34B-Chat
- range 57, 72
dolphin-2_2-yi-34b
- range 65, 80
OrionStar-Yi-34B-Chat
```
# Tips
Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default.
Sometimes the model "spells out" the stop token as </s> like Capybara, so you may need to add </s> as an additional stopping condition.
# Benchmarks
Coming soon.
# Acknowledgements
- Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project
- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
- Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figuring out model merge options
- Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models
- This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
|
gmihaila/bert-base-cased-sst2 | gmihaila | 2024-06-26T11:52:09Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-01-23T11:40:15Z | ---
license: apache-2.0
---
|
Aditya685/Upshot-NeuralHermes-2.5-Mistral-7B-slerp | Aditya685 | 2024-03-13T08:59:44Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"Aditya685/upshot-sih",
"conversational",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:Aditya685/upshot-sih",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-04T13:32:34Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralHermes-2.5-Mistral-7B
- Aditya685/upshot-sih
base_model:
- mlabonne/NeuralHermes-2.5-Mistral-7B
- Aditya685/upshot-sih
license: mit
---
# Upshot-NeuralHermes-2.5-Mistral-7B-slerp
Upshot-NeuralHermes-2.5-Mistral-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
* [Aditya685/upshot-sih](https://huggingface.co/Aditya685/upshot-sih)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
- model: Aditya685/upshot-sih
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Aditya685/Upshot-NeuralHermes-2.5-Mistral-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
thrunlab/Mistral_Sparse_pretraining_80_percent | thrunlab | 2024-02-15T02:03:12Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-02-04T14:06:10Z | ---
tags:
- trl
- sft
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Mistral_Sparse_pretraining_80_percent
results: []
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral_Sparse_pretraining_80_percent
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8189
- Accuracy: 0.6641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
vicgalle/solarized-18B-truthy | vicgalle | 2024-02-11T16:18:24Z | 440 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-02-11T15:59:29Z | ---
license: apache-2.0
datasets:
- jondurbin/truthy-dpo-v0.1
---
## Solarized-18B-truthy
Solarized-18B-dpo fine-tuned to improve truthfulness.
It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. Then, we applied DPO over a high-quality preference dataset.
 |
Crystalcareai/Qwen1.5-8x7b | Crystalcareai | 2024-03-01T17:17:16Z | 440 | 46 | transformers | [
"transformers",
"pytorch",
"mixtral",
"text-generation",
"conversational",
"dataset:Crystalcareai/MoD",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-17T18:42:15Z | ---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: >-
https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
datasets:
- Crystalcareai/MoD
---
# Please note this is the model that accompanies the dataset: https://huggingface.co/datasets/Crystalcareai/MoD. The README is the same for both, with more detail below
## Hey, I'm Lucas
I'm excited to share an early release of a project that has kept me busy for the last couple of weeks. Mixtral's release propelled me into a deep dive into MoEs.
With the release of Qwen1.5, I was curious to see how it would compare to Mixtral.
Coming from a background as an acting teacher and coach, I saw parallels between high-quality scripts' impact on performances and the importance of curating high-quality data for training models. This led me to explore data curation, especially for training Mixture of Experts (MoE) models. I looked into Teknium's OpenHermes dataset, Jon Durbin's collections on GitHub, and Eric Hartford's methods for achieving specific outcomes with models.
I curated a dataset, named Mixture of Data (MoD), from various sources, including Bagel, OpenHermes, and many more, totaling about 780,000 distinct ShareGPT conversations. This dataset aims to encourage MoE models to develop their own distinct experts.
After training Qwen1.5-7b on 100k random samples from MoD over four epochs and merging the fine-tuned model 8x, I used an approach utilizing a random gate, without specialized fine-tuning done to any of the 8 experts. The result was a model that initially made no sense, lacking a base model and clear guidance on expert usage.
Despite challenges, such as training interruptions caused by CUDA errors on Runpod, the model showed promising adaptability to the rest of the MoD dataset, even with limited training (0.45 of 4 planned epochs were completed before my compute budget ran out). It performs comparably to Mixtral in (admittedly naive) preliminary reasoning tests.
These weeks have been incredibly rewarding and educational, thanks to the contributions of Jon Durbin, Maxime Labonne, Teknium, Eric Hartford, and Charles Goddard. Their work has made these technologies accessible and inspired my project. A special thank you to Teknium and Eric Hartford, who have been generous with their time - answering my questions with kindness and humility.
I am currently training a 2.0 model - that I expect to beat Mixtral on most benchmarks. Thank you for your interest and support. Let's push the boundaries of what's possible together.
Lucas
|
jarod0411/zinc10M_gpt2-medium_SMILES_step1 | jarod0411 | 2024-02-20T08:58:25Z | 440 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2-medium",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-19T09:06:52Z | ---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: zinc10M_gpt2-medium_SMILES_step1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zinc10M_gpt2-medium_SMILES_step1
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5598
- Accuracy: 0.8151
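The card does not include usage code; a minimal sampling sketch (assuming the standard GPT-2 tokenizer inherited from the base model) might look like this:
```python
# Sketch (not from the original card): sampling SMILES-like continuations.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "jarod0411/zinc10M_gpt2-medium_SMILES_step1"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

inputs = tokenizer("CC(=O)O", return_tensors="pt")  # seed fragment (acetic acid)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```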
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.635 | 1.0 | 10635 | 0.6169 | 0.8007 |
| 0.6073 | 2.0 | 21270 | 0.5937 | 0.8066 |
| 0.5932 | 3.0 | 31905 | 0.5828 | 0.8093 |
| 0.5843 | 4.0 | 42540 | 0.5754 | 0.8112 |
| 0.5782 | 5.0 | 53175 | 0.5704 | 0.8124 |
| 0.5729 | 6.0 | 63810 | 0.5666 | 0.8134 |
| 0.5691 | 7.0 | 74445 | 0.5638 | 0.8141 |
| 0.5666 | 8.0 | 85080 | 0.5620 | 0.8145 |
| 0.5644 | 9.0 | 95715 | 0.5606 | 0.8149 |
| 0.5629 | 10.0 | 106350 | 0.5598 | 0.8151 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LordNoah/spin_gpt2_medium_alpaca_e4 | LordNoah | 2024-02-19T09:49:17Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-19T09:43:36Z | ---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AIGym/TinyGPT2-81M | AIGym | 2024-02-24T17:34:05Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-20T18:00:27Z | ---
license: apache-2.0
library_name: transformers
---
# Model Card for Model ID
This tiny model is an 81-million-parameter GPT-2-based model trained from scratch on a 3060 Ti. It uses the GPT-2 tokenizer from the GPT-2 repo here on Hugging Face.
We are training our own tokenizer from scratch and will release a version 2 of this model, trained on even more datasets, once that is complete.
This model is currently stored in float32 but will shortly be converted to float16 and bfloat16.
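Until the converted weights are published, the float32 checkpoint can be cast at load time; a minimal sketch (repo id assumed from this card):
```python
# Sketch: loading the float32 checkpoint in bfloat16 on the fly.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("AIGym/TinyGPT2-81M", torch_dtype=torch.bfloat16)
```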
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
Inference Code:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load fine-tuned GPT-2 model and tokenizer
model = GPT2LMHeadModel.from_pretrained("AIGym/TinyGPT2-81M-colab") # or change the name to the checkpoint if you wanted to try them out
tokenizer = GPT2Tokenizer.from_pretrained("AIGym/TinyGPT2-81M-colab") # use the same as the one above unless you know what you are doing
# Example prompts
prompts = [
"Artificial intelligence is",
"The future of humanity depends on",
"In a galaxy far, far away, there lived",
"To be or not to be, that is",
"Once upon a time, there was a"
]
# Function to generate text based on a prompt
def generate_text(prompt, max_length=120, temperature=0.3):
input_ids = tokenizer.encode(prompt, return_tensors="pt")
attention_mask = torch.ones(input_ids.shape, dtype=torch.long)
    output = model.generate(input_ids, attention_mask=attention_mask, max_length=max_length, do_sample=True, temperature=temperature, num_return_sequences=1)  # do_sample=True so the temperature setting takes effect
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
return generated_text
# Generate and print completions for each prompt
for prompt in prompts:
completion = generate_text(prompt)
print("Prompt:", prompt)
print("Completion:", completion)
print()
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LoneStriker/OpenCodeInterpreter-DS-6.7B-GGUF | LoneStriker | 2024-02-24T15:07:38Z | 440 | 9 | null | [
"gguf",
"code",
"text-generation",
"en",
"arxiv:2402.14658",
"region:us"
]
| text-generation | 2024-02-24T14:56:01Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
---
<h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1>
<p align="center">
<img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
</p>
<p align="center">
<a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
|
<a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
</p>
<hr>
## Introduction
OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities.
For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658), available on arXiv.
## Model Usage
### Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path="OpenCodeInterpreter-DS-6.7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
model.eval()
prompt = "Write a function to find the shared elements from the given two lists."
inputs = tokenizer.apply_chat_template(
[{'role': 'user', 'content': prompt }],
return_tensors="pt"
).to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=1024,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Contact
If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected].
We're here to assist you! |
liminerity/Smaug-slerp-30b-v0.1 | liminerity | 2024-03-01T04:01:12Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-01T03:45:26Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- abacusai/Smaug-72B-v0.1
- abacusai/Smaug-72B-v0.1
---
# Smaug-slerpB-v0.1
Smaug-slerpB-v0.1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: abacusai/Smaug-72B-v0.1
layer_range: [0, 32]
- model: abacusai/Smaug-72B-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: abacusai/Smaug-72B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
arcee-ai/Alpaca-Dragon-Smaug-Slerp | arcee-ai | 2024-03-08T20:14:46Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"ibivibiv/alpaca-dragon-72b-v1",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-08T17:23:41Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- ibivibiv/alpaca-dragon-72b-v1
- abacusai/Smaug-72B-v0.1
---
# Alpaca-Dragon-Smaug-Slerp
Alpaca-Dragon-Smaug-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [ibivibiv/alpaca-dragon-72b-v1](https://huggingface.co/ibivibiv/alpaca-dragon-72b-v1)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: ibivibiv/alpaca-dragon-72b-v1
layer_range: [0, 80]
- model: abacusai/Smaug-72B-v0.1
layer_range: [0, 80]
merge_method: slerp
base_model: abacusai/Smaug-72B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Gweizheng/Marcoro14-7B-dare | Gweizheng | 2024-04-03T09:13:18Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"samir-fama/SamirGPT-v1",
"abacusai/Slerp-CM-mist-dpo",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-03T09:06:53Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
---
# Marcoro14-7B-dare
Marcoro14-7B-dare is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
``` |
yyh0901/lloma_step50 | yyh0901 | 2024-04-06T11:39:59Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-06T06:35:40Z | ---
license: apache-2.0
pipeline_tag: text-generation
---
"_name_or_path": "/data/yyh/model/models--meta-llama--Llama-2-7b-hf",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 65536,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 16.0,
"type": "dynamic"
},
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.39.2",
"use_cache": true,
"vocab_size": 32000
Extended context length; trained for 50 steps on LLaMA-2-7B. |
tetrisblack/MicroLlama-GGUF | tetrisblack | 2024-04-17T21:03:32Z | 440 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-04-09T22:46:05Z | Entry not found |
Monor/Llama3-ChatQA-1.5-8B-gguf | Monor | 2024-05-09T00:52:48Z | 440 | 1 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-07T12:03:59Z | ---
license: apache-2.0
---
## Introduction
Quantized versions of [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) in f16, q2, q3, q4, q5, q6 and q8 formats, produced with llama.cpp.
|
amara16/t5-qa-large-project | amara16 | 2024-05-10T01:08:58Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-05-10T01:05:39Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Orthocopter_8B-i1-GGUF | mradermacher | 2024-05-11T18:49:25Z | 440 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jeiku/Orthocopter_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-10T20:10:51Z | ---
base_model: jeiku/Orthocopter_8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jeiku/Orthocopter_8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Orthocopter_8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
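For example, a minimal sketch with the `llama-cpp-python` bindings (the file name is one of the quants from the table below; adjust to whichever file you download):
```python
# Sketch: running one of the provided GGUF quants locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Orthocopter_8B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```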
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF/resolve/main/Orthocopter_8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nathantablang/question-answering-qa-may-12-tablang-LOCAL | nathantablang | 2024-05-12T21:25:19Z | 440 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-05-12T21:20:03Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF | mradermacher | 2024-05-19T22:31:22Z | 440 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9.1-yi-1.5-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-19T12:19:48Z | ---
base_model: cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q4_0.gguf) | i1-Q4_0 | 5.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.i1-Q6_K.gguf) | i1-Q6_K | 7.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF | LiteLLMs | 2024-05-24T14:57:00Z | 440 | 0 | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"GGUF",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-24T14:45:45Z |
---
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
- GGUF
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
quantized_by: andrijdavid
---
# Hermes-2-Theta-Llama-3-8B-GGUF
- Original model: [Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
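If you prefer Python over the CLI, roughly the same single-file download can be done with `hf_hub_download` (a minimal sketch; the repo id and filename are the ones shown above):
```python
# Minimal sketch: download one quant file with the huggingface_hub Python API,
# mirroring the huggingface-cli command above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF",
    filename="Q4_0/Q4_0-00001-of-00009.gguf",
    local_dir=".",
)
print(local_path)
```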
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Hermes-2-Theta-Llama-3-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="chatml")  # This model uses ChatML; set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Hermes-2-Theta-Llama-3-8B
# Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model, Hermes-2 Θ, that combines the best of both worlds.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt: special tokens are added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
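For example, a minimal sketch of tokenizing with the generation prompt appended (reusing the `tokenizer`, `model`, and `messages` from above; the generation parameters are illustrative):
```python
# Sketch: append the assistant header before generating, then decode only the
# newly generated tokens.
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```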
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
Use the system role with this message, followed by a function signature JSON, as shown in this example.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
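A minimal sketch of how such a `<tool_call>` block could be parsed (illustrative only; the official parsing and templating helpers live in the Hermes-Function-Calling repo linked above):
```python
# Sketch: pull the JSON out of <tool_call>...</tool_call> tags in the assistant output.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(assistant_text: str) -> list[dict]:
    return [json.loads(m) for m in TOOL_CALL_RE.findall(assistant_text)]

calls = parse_tool_calls(
    '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
)
# -> [{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}]
```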
Once you have parsed the tool call, call the API, get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, under which it should respond with **only** a JSON object that follows a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow that JSON format in its response; all you have to do is give a typical user prompt, and it will respond in JSON.
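As an illustration (a sketch, not the repo's own `jsonmode.py`; the model class and field names below are made up), the {schema} string could be produced from a pydantic model like this:
```python
# Sketch: build the {schema} for the JSON-mode system prompt from a pydantic v2 model.
import json
from pydantic import BaseModel

class StockSummary(BaseModel):  # hypothetical example schema
    symbol: str
    company_name: str
    pe_ratio: float

schema = json.dumps(StockSummary.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```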
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2441|± |0.0270|
|agieval_logiqa_en | 0|acc |0.3687|± |0.0189|
| | |acc_norm|0.3840|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2304|± |0.0278|
| | |acc_norm|0.2174|± |0.0273|
|agieval_lsat_lr | 0|acc |0.5471|± |0.0221|
| | |acc_norm|0.5373|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.7670|± |0.0295|
| | |acc_norm|0.7379|± |0.0307|
|agieval_sat_en_without_passage| 0|acc |0.4417|± |0.0347|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_math | 0|acc |0.4000|± |0.0331|
| | |acc_norm|0.3455|± |0.0321|
```
Average: 44.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6000|± |0.0356|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6585|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2340|± |0.0224|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2057|± |0.0153|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5367|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4040|± |0.0220|
|bigbench_navigate | 0|multiple_choice_grade|0.4970|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7075|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4821|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2295|± |0.0133|
|bigbench_snarks | 0|multiple_choice_grade|0.6906|± |0.0345|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5375|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.6270|± |0.0153|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2216|± |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1594|± |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5367|± |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM).
Note: To use function calling, see the GitHub repo linked above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-8B,
url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
<!-- original-model-card end -->
|
mistral-community/Mistral-7B-Instruct-v0.3 | mistral-community | 2024-07-01T08:52:29Z | 440 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-26T15:46:26Z | ---
license: apache-2.0
---
# Model Card for Mistral-7B-Instruct-v0.3
> [!WARNING]
> This model checkpoint is provided as-is and might not be up-to-date. Please use the corresponding version from the https://huggingface.co/mistralai organization.
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2):
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall |
mradermacher/quill-72b-GGUF | mradermacher | 2024-06-01T18:35:17Z | 440 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:billyjoe/quill-72b",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T20:18:20Z | ---
base_model: billyjoe/quill-72b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/billyjoe/quill-72b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/quill-72b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.IQ3_XS.gguf) | IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.IQ3_M.gguf) | IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
| [PART 1](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.SOURCE.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.SOURCE.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/quill-72b-GGUF/resolve/main/quill-72b.SOURCE.gguf.part3of3) | SOURCE | 145.5 | source gguf, only provided when it was hard to come by |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gglabs/TinyLM-Chat-0612-4dataset-10-epoch | gglabs | 2024-06-12T18:01:10Z | 440 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:gglabs/TinyLM-Chat-0611-11-epoch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-12T17:56:09Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: gglabs/TinyLM-Chat-0611-11-epoch
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** gglabs/TinyLM-Chat-0611-11-epoch
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CHE-72/Qwen1.5-4B-Chat-Q4_K_M-GGUF | CHE-72 | 2024-06-22T18:56:53Z | 440 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"license:other",
"region:us"
]
| text-generation | 2024-06-22T18:56:41Z | ---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-4b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-4b-chat-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-4b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-4b-chat-q4_k_m.gguf -c 2048
```
|
Narsil/gpt3 | Narsil | 2023-05-04T14:04:56Z | 439 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-05-04T14:04:55Z | ---
language: en
tags:
- exbert
license: mit
pipeline_tag: text-generation
duplicated_from: Narsil/gpt2
---
# GPT-2
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. Internally, the model uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` and not the future tokens.
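As a small illustrative sketch of that masking (assuming PyTorch): a causal attention mask is just a lower-triangular matrix, so position `i` can only attend to positions `1` to `i`:
```python
# Sketch: a causal (lower-triangular) attention mask for a 5-token sequence.
import torch

seq_len = 5
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask)
# Row i is True only for columns <= i, i.e. a token cannot attend to future tokens.
```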
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
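A quick sketch of checking those two numbers with the tokenizer (illustrative):
```python
# Sketch: confirm the BPE vocabulary size and maximum input length mentioned above.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(tokenizer.vocab_size)        # 50257
print(tokenizer.model_max_length)  # 1024
```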
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
timm/caformer_b36.sail_in1k | timm | 2023-05-05T05:35:33Z | 439 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-05-05T05:34:02Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for caformer_b36.sail_in1k
A CAFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 98.8
- GMACs: 23.2
- Activations (M): 67.3
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_b36.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_b36.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 128, 56, 56])
    # torch.Size([1, 256, 28, 28])
    # torch.Size([1, 512, 14, 14])
    # torch.Size([1, 768, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_b36.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
Marfuen98/photorealistic-1 | Marfuen98 | 2023-06-19T19:01:19Z | 439 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-01T20:21:14Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/43331?modelVersionId=94640
|
wtnan2003/vit-base-patch16-224-in21k-finetuned-lora-garbage_classification | wtnan2003 | 2023-08-08T08:55:37Z | 439 | 1 | peft | [
"peft",
"tensorboard",
"image-classification",
"region:us"
]
| image-classification | 2023-08-08T08:31:49Z | ---
library_name: peft
pipeline_tag: image-classification
---
# dataset source
https://www.kaggle.com/datasets/asdasdasasdas/garbage-classification?sort=votes
# inference example
```py
from peft import PeftConfig, PeftModel
from transformers import AutoModelForImageClassification, AutoImageProcessor
import torch
from PIL import Image
import requests
repo_name = f"wtnan2003/vit-base-patch16-224-in21k-finetuned-lora-garbage_classification"
label2id = {
"cardboard":0,
"glass":1,
"metal":2,
"paper":3,
"plastic":4,
"trash":5
}
id2label = {value:key for key, value in label2id.items()}
config = PeftConfig.from_pretrained(repo_name)
model = AutoModelForImageClassification.from_pretrained(
config.base_model_name_or_path,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True,
)
# Load the LoRA model
inference_model = PeftModel.from_pretrained(model, repo_name)
url = "https://www.uky.edu/facilities/sites/www.uky.edu.facilities/files/Cardboard%20Image.png"
# url = "https://th.bing.com/th/id/OIP.BkzhM2nwEy1edmV7WvU4EAHaJ4?pid=ImgDet&rs=1https://i.redd.it/01msg69otvl21.jpg" # glass
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained(repo_name)
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
with torch.no_grad():
    outputs = inference_model(**encoding)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", inference_model.config.id2label[predicted_class_idx])
#Predicted class: cardboard
``` |
bardsai/finance-sentiment-fr-base | bardsai | 2023-09-18T09:54:48Z | 439 | 3 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"fr",
"dataset:datasets/financial_phrasebank",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T09:53:50Z | ---
language: fr
tags:
- text-classification
- financial-sentiment-analysis
- sentiment-analysis
datasets:
- datasets/financial_phrasebank
metrics:
- f1
- accuracy
- precision
- recall
widget:
- text: "Le chiffre d'affaires net a augmenté de 30 % pour atteindre 36 millions d'euros."
example_title: "Example 1"
- text: "Coup d'envoi du vendredi fou. Liste des promotions en magasin."
example_title: "Example 2"
- text: "Les actions de CDPROJEKT ont enregistré la plus forte baisse parmi les entreprises cotées au WSE."
example_title: "Example 3"
---
# Finance Sentiment FR (base)
Finance Sentiment FR (base) is a model based on [camembert-base](https://huggingface.co/camembert-base) for analyzing the sentiment of French financial news. It was trained on the translated version of [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) for 10 epochs on a single RTX 3090 GPU.
The model gives you one of three labels: positive, negative and neutral.
## How to use
You can use this model directly with a pipeline for sentiment-analysis:
```python
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model="bardsai/finance-sentiment-fr-base")
nlp("Le chiffre d'affaires net a augmenté de 30 % pour atteindre 36 millions d'euros.")
```
```bash
[{'label': 'positive', 'score': 0.9987998807375955}]
```
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.963 |
| precision macro | 0.959 |
| recall macro | 0.967 |
| accuracy | 0.971 |
| samples per second | 140.8 |
(Performance was evaluated on an RTX 3090 GPU.)
## Changelog
- 2023-09-18: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
|
TheBloke/tulu-30B-GGUF | TheBloke | 2023-09-27T12:53:39Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"dataset:sahil2801/CodeAlpaca-20k",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.07327",
"base_model:allenai/tulu-30b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T02:43:18Z | ---
language:
- en
license: other
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- sahil2801/CodeAlpaca-20k
model_name: Tulu 30B
base_model: allenai/tulu-30b
inference: false
model_creator: Allen Institute for AI
model_type: llama
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tulu 30B - GGUF
- Model creator: [Allen Institute for AI](https://huggingface.co/allenai)
- Original model: [Tulu 30B](https://huggingface.co/allenai/tulu-30b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Allen AI's Tulu 30B](https://huggingface.co/allenai/tulu-30b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tulu-30B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tulu-30B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-30B-GGUF)
* [Allen Institute for AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-30B-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Tulu
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
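A minimal sketch of filling that template in Python (illustrative; the placeholder is just the user message):
```python
# Sketch: build a single-turn Tulu prompt string from a user message.
TULU_TEMPLATE = "<|user|>\n{prompt}\n<|assistant|>\n"
prompt = TULU_TEMPLATE.format(prompt="Why is the sky blue?")
```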
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tulu-30b.Q2_K.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes |
| [tulu-30b.Q3_K_S.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss |
| [tulu-30b.Q3_K_M.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss |
| [tulu-30b.Q3_K_L.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss |
| [tulu-30b.Q4_0.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tulu-30b.Q4_K_S.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss |
| [tulu-30b.Q4_K_M.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended |
| [tulu-30b.Q5_0.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tulu-30b.Q5_K_S.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended |
| [tulu-30b.Q5_K_M.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended |
| [tulu-30b.Q6_K.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss |
| [tulu-30b.Q8_0.gguf](https://huggingface.co/TheBloke/tulu-30B-GGUF/blob/main/tulu-30b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/tulu-30B-GGUF and below it, a specific filename to download, such as: tulu-30b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/tulu-30B-GGUF tulu-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
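If you prefer to stay in Python, the `huggingface_hub` library exposes the same download programmatically. The following is a minimal sketch; the repo and filename match the CLI example above, and the target directory is an assumption you should adjust:
```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into the current directory (assumed location).
model_path = hf_hub_download(
    repo_id="TheBloke/tulu-30B-GGUF",
    filename="tulu-30b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(f"Downloaded to: {model_path}")
```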
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/tulu-30B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tulu-30B-GGUF tulu-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tulu-30b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
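For llama-cpp-python, a minimal sketch looks like the following; the model path and generation parameters are assumptions to adapt to your setup, and the available options may vary between llama-cpp-python versions:
```python
from llama_cpp import Llama

# Assumes the GGUF file has already been downloaded to the current directory.
llm = Llama(
    model_path="./tulu-30b.Q4_K_M.gguf",
    n_ctx=2048,       # context length
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

# Tulu prompt template, with the trailing newline after <|assistant|>
prompt = "<|user|>\nWrite a haiku about llamas.\n<|assistant|>\n"
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```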
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tulu-30B-GGUF", model_file="tulu-30b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
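As a rough illustration, LangChain's LlamaCpp wrapper can point at the same GGUF file. This is only a sketch: the import path and parameters depend on your LangChain version (newer releases expose the class under `langchain_community.llms`), so verify against the guides linked above:
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./tulu-30b.Q4_K_M.gguf",  # assumed local path
    n_gpu_layers=32,
    n_ctx=2048,
    temperature=0.7,
)
print(llm("<|user|>\nSummarise what GGUF is in one sentence.\n<|assistant|>\n"))
```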
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Allen AI's Tulu 30B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 30B fp16
These files are pytorch format fp16 model files for [Allen AI's Tulu 30B](https://huggingface.co/allenai/tulu-30b).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-30B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-30B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-30B-fp16)
## Prompt template
The following template should be used:
```
<|user|>
prompt goes here
<|assistant|>
```
**Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly.
In other words, the prompt is:
```
<|user|>\nprompt goes here\n<|assistant|>\n
```
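In code, that means building the prompt with explicit newlines; the helper below is a minimal sketch and the instruction text is just a placeholder:
```python
def build_tulu_prompt(instruction: str) -> str:
    # Note the trailing newline after <|assistant|> - the model needs it to respond correctly.
    return f"<|user|>\n{instruction}\n<|assistant|>\n"

print(repr(build_tulu_prompt("Write a limerick about camels.")))
```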
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Allen AI's Tulu 30B
# Tulu 30B
This model is a 30B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
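For concreteness, a full invocation with hypothetical local paths might look like this (replace all three paths with wherever your LLaMa base weights, output directory, and downloaded diff actually live):
```bash
# Hypothetical paths - adjust to your own setup
hf_llama_path=/models/llama-30b-hf        # LLaMa base weights in HF format
output_path=/models/tulu-30b-recovered    # where the recovered model will be written
diff_location=/models/tulu-30b-diff       # this model diff, downloaded or cloned

python scripts/weight_diff.py recover \
    --path_raw ${hf_llama_path} \
    --path_tuned ${output_path} \
    --path_diff ${diff_location}
```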
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 57.7 | 58.4 | 6.0 | 51.0 | 45.8 | 48.7 | 58.2 | 12.3 | 25.4 | 46.0 | 63.5 | 44.7 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
<!-- original-model-card end -->
|
TheBloke/Mistralic-7B-1-GGUF | TheBloke | 2023-10-04T15:42:55Z | 439 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:SkunkworksAI/Mistralic-7B-1",
"text-generation-inference",
"region:us"
]
| null | 2023-10-04T13:33:20Z | ---
base_model: SkunkworksAI/Mistralic-7B-1
inference: false
model_creator: SkunkworksAI
model_name: Mistralic 7B-1
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### System: {system_message}
### Instruction: {prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistralic 7B-1 - GGUF
- Model creator: [SkunkworksAI](https://huggingface.co/SkunkworksAI)
- Original model: [Mistralic 7B-1](https://huggingface.co/SkunkworksAI/Mistralic-7B-1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [SkunkworksAI's Mistralic 7B-1](https://huggingface.co/SkunkworksAI/Mistralic-7B-1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistralic-7B-1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF)
* [SkunkworksAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SkunkworksAI/Mistralic-7B-1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistralic
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### System: {system_message}
### Instruction: {prompt}
```
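As a minimal sketch, the template can be filled in from Python like this; the system message and instruction are placeholders, and the blank-line spacing follows the llama.cpp example further down:
```python
def build_mistralic_prompt(system_message: str, prompt: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### System: {system_message}\n\n"
        f"### Instruction: {prompt}\n\n"
    )

print(build_mistralic_prompt("You are a helpful assistant.", "Explain GGUF in one sentence."))
```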
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistralic-7b-1.Q2_K.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistralic-7b-1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistralic-7b-1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistralic-7b-1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistralic-7b-1.Q4_0.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistralic-7b-1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistralic-7b-1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistralic-7b-1.Q5_0.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistralic-7b-1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistralic-7b-1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistralic-7b-1.Q6_K.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistralic-7b-1.Q8_0.gguf](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF/blob/main/mistralic-7b-1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mistralic-7B-1-GGUF and below it, a specific filename to download, such as: mistralic-7b-1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mistralic-7B-1-GGUF mistralic-7b-1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mistralic-7B-1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistralic-7B-1-GGUF mistralic-7b-1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistralic-7b-1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### System: {system_message}\n\n### Instruction: {prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistralic-7B-1-GGUF", model_file="mistralic-7b-1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: SkunkworksAI's Mistralic 7B-1
<p><h1> 🦾 Mistralic-7B-1 🦾 </h1></p>
Special thanks to Together Compute for sponsoring Skunkworks with compute!
**INFERENCE**
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device('cuda')
system_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
system_no_input_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
def generate_prompt(instruction, input=None):
if input:
prompt = f"### System:\n{system_prompt}\n\n"
else:
prompt = f"### System:\n{system_no_input_prompt}\n\n"
prompt += f"### Instruction:\n{instruction}\n\n"
if input:
prompt += f"### Input:\n{input}\n\n"
return prompt + """### Response:\n"""
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/Mistralic-7B-1")
tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/Mistralic-7B-1")
while True:
instruction = input("Enter Instruction: ")
instruction = generate_prompt(instruction)
inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=1000, do_sample=True, temperature=0.01, use_cache=True, eos_token_id=tokenizer.eos_token_id)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
**EVALUATION**

Average: 0.72157
For comparison:
mistralai/Mistral-7B-v0.1 scores 0.7116
mistralai/Mistral-7B-Instruct-v0.1 scores 0.6794
<!-- original-model-card end -->
|
TheBloke/ALMA-7B-GGUF | TheBloke | 2023-10-14T18:26:31Z | 439 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2309.11674",
"base_model:haoranxu/ALMA-7B",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-10-14T18:18:16Z | ---
base_model: haoranxu/ALMA-7B
inference: false
license: mit
model_creator: Haoran Xu
model_name: ALMA 7B
model_type: llama
prompt_template: 'Translate this from Chinese to English:
Chinese: {prompt}
English:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ALMA 7B - GGUF
- Model creator: [Haoran Xu](https://huggingface.co/haoranxu)
- Original model: [ALMA 7B](https://huggingface.co/haoranxu/ALMA-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Haoran Xu's ALMA 7B](https://huggingface.co/haoranxu/ALMA-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-7B-GGUF)
* [Haoran Xu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ALMA
```
Translate this from Chinese to English:
Chinese: {prompt}
English:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Haoran Xu's ALMA 7B](https://huggingface.co/haoranxu/ALMA-7B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [alma-7b.Q2_K.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [alma-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [alma-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [alma-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [alma-7b.Q4_0.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [alma-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [alma-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [alma-7b.Q5_0.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [alma-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [alma-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [alma-7b.Q6_K.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [alma-7b.Q8_0.gguf](https://huggingface.co/TheBloke/ALMA-7B-GGUF/blob/main/alma-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/ALMA-7B-GGUF and below it, a specific filename to download, such as: alma-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/ALMA-7B-GGUF alma-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/ALMA-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ALMA-7B-GGUF alma-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m alma-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Translate this from Chinese to English:\nChinese: {prompt}\nEnglish:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
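As a concrete illustration, substituting the example sentence from the original model card into the template gives a command like this (a sketch; adjust the filename and options to your setup):
```shell
./main -ngl 32 -m alma-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
```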
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/ALMA-7B-GGUF", model_file="alma-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Haoran Xu's ALMA 7B
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models for translation purposes.**
A quick start for using our best system (ALMA-13B-LoRA) for translation, showing how to translate "我爱机器翻译。" into English:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).
<!-- original-model-card end -->
|
jihyoung/rebot-generation | jihyoung | 2023-12-21T05:19:09Z | 439 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"conversational",
"en",
"dataset:jihyoung/ConversationChronicles",
"arxiv:2310.13420",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-22T15:26:39Z | ---
license: cc-by-4.0
datasets:
- jihyoung/ConversationChronicles
language:
- en
pipeline_tag: conversational
---
# 👫ReBot - Generation Module⏰
ReBot is a novel multi-session dialogue model that can generate dialogue with chronological dynamics! ReBot consists of two modules: (1) a chronological summarization module; (2) a dialogue generation module.
**This repository is for the dialogue generation module.** You can find the summarization module in [this repository](https://huggingface.co/jihyoung/rebot-summarization).
🚨 Please be cautious when testing our model with the Hosted Inference API. Our model takes sequences as input, so you should provide sequences as input through the API as well.
## Model description
+ Paper: [Conversation Chronicles: Towards Diverse Temporal and Relational Dynamics in Multi-Session Conversations](https://arxiv.org/abs/2310.13420)
+ Dataset : [Conversation Chronicles](https://huggingface.co/datasets/jihyoung/ConversationChronicles)
+ Generation Module of Model: this repository
+ Summarization Module of Model: [chronological summarization module](https://huggingface.co/jihyoung/rebot-summarization)
## Load with Transformers
To load our model with Hugging Face Transformers, please use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("jihyoung/rebot-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("jihyoung/rebot-generation")
```
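As a minimal generation sketch building on the snippet above: the exact serialization of the dialogue context is not documented here, so the flattened `speaker: utterance` sequence below is only an assumed illustration; check the paper and the summarization module for the format actually used during training.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("jihyoung/rebot-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("jihyoung/rebot-generation")
# Hypothetical flattened dialogue context; the real input sequence format may differ.
context = "A: Long time no see! How was your trip? B: It was great, thanks for asking. A: Did you visit the old town?"
inputs = tokenizer(context, return_tensors="pt", truncation=True)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```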
## Citation Information
```
@inproceedings{jang-etal-2023-conversation,
title = "Conversation Chronicles: Towards Diverse Temporal and Relational Dynamics in Multi-Session Conversations",
author = "Jang, Jihyoung and
Boo, Minseong and
Kim, Hyounghun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.838",
doi = "10.18653/v1/2023.emnlp-main.838",
pages = "13584--13606",
}
```
|
Yntec/OG | Yntec | 2024-05-08T05:31:06Z | 439 | 4 | diffusers | [
"diffusers",
"safetensors",
"General",
"Eldreths",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-06T03:04:05Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Eldreths
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Elldreth's OG 4060 Mix
Safetensors version of this model with the MoistMixV2 VAE baked in.
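A minimal `diffusers` loading sketch, assuming the standard `StableDiffusionPipeline` layout indicated by the repository tags (steps and guidance scale below are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the merged checkpoint; the MoistMixV2 VAE is already baked in.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/OG", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "fine details portrait of joyful cute girl, nature trees, bokeh, close-up, anime masterpiece"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("og_sample.png")
```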
Sample and prompt:

fine details portrait of joyful cute girl, aliens vivid, nature trees, meadows at night, bokeh, close-up, anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of kyoani
Original page: https://huggingface.co/danbrown/elldreth-og-mix |
TheBloke/Yi-6B-GGUF | TheBloke | 2023-11-10T15:57:41Z | 439 | 13 | transformers | [
"transformers",
"gguf",
"yi",
"text-generation",
"base_model:01-ai/Yi-6B",
"license:other",
"region:us"
]
| text-generation | 2023-11-10T11:33:47Z | ---
base_model: 01-ai/Yi-6B
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 6B
model_type: yi
pipeline_tag: text-generation
prompt_template: '{prompt}'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 6B - GGUF
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [Yi 6B](https://huggingface.co/01-ai/Yi-6B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [01-ai's Yi 6B](https://huggingface.co/01-ai/Yi-6B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-6B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-6B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-6B-GGUF)
* [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-6B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yi-6b.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q2_K.gguf) | Q2_K | 2 | 2.62 GB| 5.12 GB | smallest, significant quality loss - not recommended for most purposes |
| [yi-6b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.71 GB| 5.21 GB | very small, high quality loss |
| [yi-6b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q3_K_M.gguf) | Q3_K_M | 3 | 2.99 GB| 5.49 GB | very small, high quality loss |
| [yi-6b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.24 GB| 5.74 GB | small, substantial quality loss |
| [yi-6b.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q4_0.gguf) | Q4_0 | 4 | 3.48 GB| 5.98 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yi-6b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.50 GB| 6.00 GB | small, greater quality loss |
| [yi-6b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q4_K_M.gguf) | Q4_K_M | 4 | 3.67 GB| 6.17 GB | medium, balanced quality - recommended |
| [yi-6b.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q5_0.gguf) | Q5_0 | 5 | 4.20 GB| 6.70 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yi-6b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.20 GB| 6.70 GB | large, low quality loss - recommended |
| [yi-6b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.30 GB| 6.80 GB | large, very low quality loss - recommended |
| [yi-6b.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q6_K.gguf) | Q6_K | 6 | 4.97 GB| 7.47 GB | very large, extremely low quality loss |
| [yi-6b.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-6B-GGUF/blob/main/yi-6b.Q8_0.gguf) | Q8_0 | 8 | 6.44 GB| 8.94 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Yi-6B-GGUF and below it, a specific filename to download, such as: yi-6b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Yi-6B-GGUF yi-6b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Yi-6B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-6B-GGUF yi-6b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m yi-6b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-6B-GGUF", model_file="yi-6b.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
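The `llama-cpp-python` route mentioned above looks much the same; here is a minimal sketch (the GPU layer count and context size are illustrative):
```python
from llama_cpp import Llama
# Set n_gpu_layers to 0 if no GPU acceleration is available on your system.
llm = Llama(model_path="yi-6b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)
output = llm("AI is going to", max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```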
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: 01-ai's Yi 6B
<div align="center">
<img src="./Yi.svg" width="200px">
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/). The first public release contains two
bilingual (English/Chinese) base models with the parameter sizes of 6B ([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B))
and 34B ([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained
with 4K sequence length and can be extended to 32K during inference time.
The [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with
200K context length.
## News
- 🎯 **2023/11/06**: The base model of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length.
- 🎯 **2023/11/02**: The base model of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
[`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon conducting a more in-depth investigation of this difference,
we have discovered that various models may employ different prompts,
post-processing strategies, and sampling techniques, potentially resulting in
significant variations in the outcomes. Our prompt and post-processing strategy
remains consistent with the original benchmark, and greedy decoding is employed
during evaluation without any post-processing for the generated content. For
scores that were not reported by the original authors (including scores reported
with different settings), we try to get results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
## Usage
Please visit our [github repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.
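For a quick start with `transformers`, a minimal sketch looks like the following; at the time of this release the Yi architecture shipped custom modeling code, so `trust_remote_code=True` may be required (adjust for newer `transformers` versions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
inputs = tokenizer("There's a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```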
## Disclaimer
Although we use data compliance checking algorithms during the training process
to ensure the compliance of the trained model to the best of our ability, due to
the complexity of the data and the diversity of language model usage scenarios,
we cannot guarantee that the model will generate correct and reasonable output
in all scenarios. Please be aware that there is still a risk of the model
producing problematic outputs. We will not be responsible for any risks and
issues resulting from misuse, misguidance, illegal usage, and related
misinformation, as well as any associated data security concerns.
## License
The Yi series models are fully open for academic research and free commercial
usage with permission via applications. All usage must adhere to the [Model
License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE). To
apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).
<!-- original-model-card end -->
|
Felladrin/Smol-Llama-101M-Chat-v1 | Felladrin | 2024-03-03T13:38:35Z | 439 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:VMware/open-instruct",
"dataset:LDJnr/Capybara",
"dataset:cognitivecomputations/ultrachat-uncensored",
"dataset:starfishmedical/webGPT_x_dolly",
"dataset:THUDM/webglm-qa",
"base_model:BEE-spoke-data/smol_llama-101M-GQA",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-28T19:34:43Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation
base_model: BEE-spoke-data/smol_llama-101M-GQA
datasets:
- Open-Orca/SlimOrca-Dedup
- VMware/open-instruct
- LDJnr/Capybara
- cognitivecomputations/ultrachat-uncensored
- starfishmedical/webGPT_x_dolly
- THUDM/webglm-qa
widget:
- messages:
- role: system
content: You are a helpful assistant who gives creative responses.
- role: user
content: Write the background story of a game about wizards and llamas in a sci-fi world.
- messages:
- role: system
content: A friendly chat between a user and an assistant.
- role: user
content: Got a question for you!
- role: assistant
content: "Sure! What's it?"
- role: user
content: I need to build a simple website. Where should I start learning about web development?
- messages:
- role: system
content: "You are a helpful assistant who provides concise answers to the user's questions."
- role: user
content: How to become more healthy?
- messages:
- role: system
content: You are a helpful assistant, who always answers with empathy.
- role: user
content: List the pros and cons of social media.
- messages:
- role: system
content: You are a helpful assistant, who always answers with empathy.
- role: user
content: Hello!
- role: assistant
content: Hi! How can I help you today?
- role: user
content: 'Take a look at the info below.
- The tape inside the VHS cassettes is very delicate and can be easily ruined,
making them unplayable and unrepairable. The reason the tape deteriorates is that
the magnetic charge needed for them to work is not permanent, and the magnetic
particles end up losing their charge in a process known as remanence decay. These
particles could also become demagnetised via being stored too close to a magnetic
source.
- One of the most significant issues with VHS tapes is that they have moving parts,
meaning that there are more occasions when something can go wrong, damaging your
footage or preventing it from playing back. The tape itself is a prominent cause
of this, and tape slippage can occur. Tapes slippage can be caused when the tape
loses its tension, or it has become warped. These problems can occur in storage
due to high temperatures or frequent changes in humidity.
- VHS tapes deteriorate over time from infrequent or overuse. Neglect means mold
and dirt, while overuse can lead to scratches and technical difficulties. This
is why old VHS tapes inevitably experience malfunctions after a long period of
time. Usually anywhere between 10 to 25+ years.
- Some VHS tapes like newer mini DVs and Digital 8 tapes can suffer from digital
corruption, meaning that the footage becomes lost and cannot be recovered. These
tapes were the steppingstone from VHS to the digital age when capturing footage
straight to digital became the norm. Unfortunately,they are susceptible to digital
corruption, which causes video pixilation and/or loss of audio.'
- role: assistant
content: Alright!
- role: user
content: 'Now I''m going to write my question, and if the info above is useful, you can use them in your response.
Ready?'
- role: assistant
content: Ready for your question!
- role: user
content: Why do VHS tapes deteriorate over time?
inference:
parameters:
max_new_tokens: 250
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.105
model-index:
- name: Smol-Llama-101M-Chat-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 22.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Smol-Llama-101M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 28.69
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Smol-Llama-101M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Smol-Llama-101M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Smol-Llama-101M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Smol-Llama-101M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Smol-Llama-101M-Chat-v1
name: Open LLM Leaderboard
---
# A Llama Chat Model of 101M Parameters
- Base model: [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA)
- Datasets:
- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- [cognitivecomputations/ultrachat-uncensored](https://huggingface.co/datasets/cognitivecomputations/ultrachat-uncensored)
- [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly)
- [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
- Availability in other ML formats:
- GGUF: [afrideva/Smol-Llama-101M-Chat-v1-GGUF](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF)
- ONNX: [Felladrin/onnx-Smol-Llama-101M-Chat-v1](https://huggingface.co/Felladrin/onnx-Smol-Llama-101M-Chat-v1)
- MLC: [Felladrin/mlc-q4f16-Smol-Llama-101M-Chat-v1](https://huggingface.co/Felladrin/mlc-q4f16-Smol-Llama-101M-Chat-v1)
## Recommended Prompt Format
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
## Recommended Inference Parameters
```yml
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.105
```
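Putting the prompt format and inference parameters together, a minimal `transformers` sketch (it assumes the tokenizer ships the ChatML chat template shown above; otherwise format the prompt manually):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "Felladrin/Smol-Llama-101M-Chat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
messages = [
    {"role": "system", "content": "You are a helpful assistant who provides concise answers."},
    {"role": "user", "content": "How to become more healthy?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=250,
    penalty_alpha=0.5,  # contrastive search, as recommended above
    top_k=4,
    repetition_penalty=1.105,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```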
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Smol-Llama-101M-Chat-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.73|
|AI2 Reasoning Challenge (25-Shot)|22.87|
|HellaSwag (10-Shot) |28.69|
|MMLU (5-Shot) |24.93|
|TruthfulQA (0-shot) |45.76|
|Winogrande (5-shot) |50.04|
|GSM8k (5-shot) | 0.08|
|
mesolitica/mallam-1.1b-20k-instructions-rag | mesolitica | 2024-01-17T03:09:07Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-15T14:41:37Z | Entry not found |
Manolo26/metis-chat-7b | Manolo26 | 2024-01-24T22:03:36Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-24T21:32:55Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
license: apache-2.0
---
# metis-chat-7b
metis-chat-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Manolo26/metis-chat-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Technoculture/BioMistral-Hermes-Slerp | Technoculture | 2024-02-21T20:10:14Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"BioMistral/BioMistral-7B-DARE",
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-21T20:05:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- BioMistral/BioMistral-7B-DARE
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
---
# BioMistral-Hermes-Slerp
BioMistral-Hermes-Slerp is a merge of the following models:
* [BioMistral/BioMistral-7B-DARE](https://huggingface.co/BioMistral/BioMistral-7B-DARE)
* [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
## Evaluations
| Benchmark | BioMistral-Hermes-Slerp | Orca-2-7b | llama-2-7b | meditron-7b | meditron-70b |
| --- | --- | --- | --- | --- | --- |
| MedMCQA | | | | | |
| ClosedPubMedQA | | | | | |
| PubMedQA | | | | | |
| MedQA | | | | | |
| MedQA4 | | | | | |
| MedicationQA | | | | | |
| MMLU Medical | | | | | |
| MMLU | | | | | |
| TruthfulQA | | | | | |
| GSM8K | | | | | |
| ARC | | | | | |
| HellaSwag | | | | | |
| Winogrande | | | | | |
More details on the Open LLM Leaderboard evaluation results can be found here.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: BioMistral/BioMistral-7B-DARE
layer_range: [0, 32]
- model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/BioMistral-Hermes-Slerp"
messages = [{"role": "user", "content": "I am feeling sleepy these days"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ccrains/larson-72b-dpo-1.0.6 | ccrains | 2024-03-05T07:12:31Z | 439 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-27T12:29:40Z | ---
license: apache-2.0
language: en
---
## Prompt Template
This model was fine-tuned using the Llama-2 chat format; you will get the best results by prompting it in the same format.
Note that the license points back to the Qwen-72B base license, as this is a fine-tune of that model only. Respect and abide by its conditions. Again, many thanks to Abacus for making their work open; use that as inspiration to keep your own work open and to respect their license agreements. License Link
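For reference, the standard Llama-2 chat layout looks like this (the bracketed fields are placeholders; confirm the exact template against this repository's tokenizer config):
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]
```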
|
ChrisWilson011016/5FWyTkrV6NCPMACmg6NNisWvjj1yoAF13kNaLi5pbRjjRTaL_vgg | ChrisWilson011016 | 2024-03-04T19:00:48Z | 439 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-29T12:58:40Z | Entry not found |
aboros98/lilo2_dpo_005 | aboros98 | 2024-03-06T17:15:36Z | 439 | 0 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-06T16:47:15Z | ---
license: other
---
| Metric | Value |
|-----------------------|---------------------------|
| Average | - |
| ARC | 39.93 |
| ARC Easy | 63.05 |
| BoolQ | 77.40 |
| HellaSwag | 65.90 |
| OpenBookQA | 32.20 |
| PiQA | 75.95 |
| Winogrande | 57.38 |
|-----------------------|---------------------------|
| MMLU | 43.62 |
| GSM8K | |
| Truthful QA | 47.68 |
| MT-Bench | 5.96 | |
saucam/Phind-Codefuse-34B | saucam | 2024-03-12T09:30:06Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Phind/Phind-CodeLlama-34B-v2",
"codefuse-ai/CodeFuse-CodeLlama-34B",
"base_model:Phind/Phind-CodeLlama-34B-v2",
"base_model:codefuse-ai/CodeFuse-CodeLlama-34B",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-12T07:16:25Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Phind/Phind-CodeLlama-34B-v2
- codefuse-ai/CodeFuse-CodeLlama-34B
base_model:
- Phind/Phind-CodeLlama-34B-v2
- codefuse-ai/CodeFuse-CodeLlama-34B
license: mit
---
# Phind-Codefuse-34B
Phind-Codefuse-34B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Phind/Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)
* [codefuse-ai/CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B)
## 🧩 Configuration
```yaml
models:
- model: Phind/Phind-CodeLlama-34B-v2
parameters:
density: 0.5
weight: 0.6
# No parameters necessary for base model
- model: codefuse-ai/CodeFuse-CodeLlama-34B
parameters:
density: 0.5
weight: 0.4
merge_method: task_arithmetic
base_model: Phind/Phind-CodeLlama-34B-v2
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Phind-Codefuse-34B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF | mradermacher | 2024-05-06T06:09:30Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Undi95/Mixtral-8x7B-MoE-RP-Story",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-18T18:55:32Z | ---
base_model: Undi95/Mixtral-8x7B-MoE-RP-Story
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/Mixtral-8x7B-MoE-RP-Story
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
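To fetch a single quant from this repo with the `huggingface_hub` Python library, a minimal sketch (pick any filename from the table below):
```python
from huggingface_hub import hf_hub_download
path = hf_hub_download(
    repo_id="mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF",
    filename="Mixtral-8x7B-MoE-RP-Story.i1-Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF file
```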
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-MoE-RP-Story-i1-GGUF/resolve/main/Mixtral-8x7B-MoE-RP-Story.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
soulhq-ai/phi-2-insurance_qa-sft-lora | soulhq-ai | 2024-03-21T02:10:07Z | 439 | 3 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"nlp",
"insurance",
"lora",
"llm",
"phi-2",
"qa",
"sft",
"custom_code",
"en",
"dataset:soulhq-ai/insuranceQA-v2",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-20T10:37:18Z | ---
license: mit
license_link: >-
https://huggingface.co/soulhq-ai/phi-2-insurance_qa-sft-lora/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- insurance
- lora
- llm
- phi-2
- transformers
- qa
- sft
datasets:
- soulhq-ai/insuranceQA-v2
widget:
- text: "### Instruction: What is the difference between health and life insurance?\n#### Response: "
- text: "### Instruction: Does Homeowners Insurance Cover Death Of Owner?\n#### Response: "
---
## Model Summary
This model builds on the architecture of <a href="https://huggingface.com/microsoft/phi-2">Microsoft's Phi-2</a>, incorporating the LoRA [[1]](#1) paradigm for supervised fine-tuning on a high quality question answering dataset in the insurance domain.
Thus, `soulhq-ai/phi-2-insurance_qa-sft-lora` serves as a text generation model capable of answering questions around insurance.
## Dataset
We utilise the InsuranceQA dataset [[2]](#2), which comprises 27.96K QA pairs related to the insurance domain.
The content of this dataset consists of questions from real world users, the answers with high quality were composed by insurance professionals with deep domain knowledge.
Since the dataset isn't available in a readable format on the web, we make it available on huggingface in a `jsonl` format, at <a href="https://huggingface.com/datasets/soulhq-ai/insuranceQA-v2">soulhq-ai/insuranceQA-v2</a>.
## Usage
Phi-2 compatibility has been integrated in the development version (4.37.0.dev) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local transformers to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current transformers version can be verified with: `pip list | grep transformers`.
### Input Format
```
### Instruction: <Question>
### Response:
```
For instance:
```
### Instruction: What does Basic Homeowners Insurance Cover?
### Response:
```
### Inference Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("soulhq-ai/phi-2-insurance_qa-sft-lora", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("soulhq-ai/phi-2-insurance_qa-sft-lora", trust_remote_code=True)
inputs = tokenizer('''### Instruction: What Does Basic Homeowners Insurance Cover?\n### Response: ''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=1024)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Training
### Model
* Architecture: Phi-2, with LoRA modifications for efficient Insurance domain-specific fine-tuning.
* Context length: 2048 tokens
* Modifications: Added `<|eostoken|>` so the model learns to mark the end of a response, facilitating its use in dialogue systems.
### Configuration
* Hyperparameters:
* learning_rate=2e-5,
* batch_size=8,
* epochs=10,
* lora_r=32,
* lora_alpha=64.
* Infrastructure: Trained on an NVIDIA A40 and utilized the `FullyShardedDataParallelPlugin` for CPU offloading.
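The LoRA settings above map onto a `peft.LoraConfig` roughly as follows; the dropout and target modules are assumptions, since they are not documented in this card:
```python
from peft import LoraConfig, TaskType
# Rough reconstruction of the adapter configuration used for fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,  # assumed; not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed Phi-2 attention modules
)
```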
## Evaluation
Coming Soon!
## Limitations of `soulhq-ai/phi-2-insurance_qa-sft-lora`
* Generate Inaccurate Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Unreliable Responses to Instruction: It may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## License
The model is licensed under the [MIT license](https://huggingface.co/soulhq-ai/phi-2-insurance_qa-sft-lora/blob/main/LICENSE).
## Citations
[1] <a id="1" href="https://arxiv.org/abs/2106.09685">Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).</a></br>
[2] <a id="2" href="https://ieeexplore.ieee.org/abstract/document/7404872/">Feng, Minwei, et al. "Applying deep learning to answer selection: A study and an open task." 2015 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, 2015.</a>
|
mradermacher/Alpacino13b-GGUF | mradermacher | 2024-05-06T05:17:07Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"alpaca",
"en",
"base_model:digitous/Alpacino13b",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-04T20:54:42Z | ---
base_model: digitous/Alpacino13b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- alpaca
---
## About
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/digitous/Alpacino13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Alpacino13b-GGUF/resolve/main/Alpacino13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tanganke/clip-vit-base-patch32_gtsrb | tanganke | 2024-04-28T17:54:18Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"clip_vision_model",
"feature-extraction",
"dataset:tanganke/gtsrb",
"base_model:openai/clip-vit-base-patch32",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-04-28T17:52:07Z | ---
base_model:
- openai/clip-vit-base-patch32
datasets:
- tanganke/gtsrb
metrics:
- accuracy
---
# Model Card
## Model Details
- Architecture: ViT-Base with patch size 32
- Training Data: GTSRB dataset
## Training Details
Fine-tuned with the Adam optimizer at a constant learning rate of 1e-5 for 4,000 steps (batch_size=32).
Only the vision encoder is fine-tuned.
## Evaluation Results
Accuracy on GTSRB:
- pre-trained: 0.3042911887168884
- fine-tuned: 0.9881029725074768
## Usage
load vision model
```python
from transformers import CLIPVisionModel
vision_model = CLIPVisionModel.from_pretrained('tanganke/clip-vit-base-patch32_gtsrb')
```
Substitute the vision encoder of CLIP:
```python
from transformers import CLIPModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())
```
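With the fine-tuned vision encoder swapped in, the full CLIP model can be used for zero-shot classification as usual. A minimal sketch (the image path and label prompts below are illustrative, not the actual GTSRB class names):
```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, CLIPVisionModel

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
vision_model = CLIPVisionModel.from_pretrained("tanganke/clip-vit-base-patch32_gtsrb")
clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("sign.png")  # any traffic-sign image (path is illustrative)
labels = ["a photo of a stop sign", "a photo of a speed limit sign", "a photo of a yield sign"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = clip_model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))  # label -> probability
```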
|
aixsatoshi/Llama-3-youko-8b-instruct-chatvector | aixsatoshi | 2024-06-22T14:26:57Z | 439 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-01T12:00:50Z | ---
license: llama3
---
youko-8b is a Llama-3 model that is highly fluent in Japanese thanks to additional continued pretraining on Japanese data.
It was built by merging in the difference vector from the Instruct model:
> rinna/llama-3-youko-8b + 0.8*(meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)
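For illustration, this recipe can be reproduced with a plain parameter-wise update. A minimal sketch, assuming enough memory to hold three 8B checkpoints at once and access to the gated Meta Llama 3 repositories; the output directory name and the choice of the Instruct tokenizer are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

kwargs = dict(torch_dtype=torch.bfloat16, low_cpu_mem_usage=True)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", **kwargs).state_dict()
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", **kwargs).state_dict()
merged = AutoModelForCausalLM.from_pretrained("rinna/llama-3-youko-8b", **kwargs)

ratio = 0.8
with torch.no_grad():
    for name, tensor in merged.state_dict().items():
        # youko + 0.8 * (Instruct - Base), applied tensor by tensor
        tensor.add_(ratio * (inst[name] - base[name]))

merged.save_pretrained("llama-3-youko-8b-instruct-chatvector")
AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct").save_pretrained(
    "llama-3-youko-8b-instruct-chatvector"
)
```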
See [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b) for details. |
QuantFactory/Llama-3-Alpha-Centauri-v0.1-GGUF | QuantFactory | 2024-05-26T15:42:43Z | 439 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"base_model:fearlessdots/Llama-3-Alpha-Centauri-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-26T14:36:32Z | ---
license: llama3
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
library_name: transformers
pipeline_tag: text-generation
base_model: fearlessdots/Llama-3-Alpha-Centauri-v0.1
---
# Llama-3-Alpha-Centauri-v0.1-GGUF
This is quantized version of [fearlessdots/Llama-3-Alpha-Centauri-v0.1](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1) created using llama.cpp
## Disclaimer
**Note:** All models and LoRAs from the **Centaurus** series were created with the sole purpose of research. The usage of this model and/or its related LoRA implies agreement with the following terms:
- The user is responsible for what they might do with it, including how the output of the model is interpreted and used;
- The user should not use the model and its outputs for any illegal purposes;
- The user is the only one responsible for any misuse or negative consequences from using this model and/or its related LoRA.
I do not endorse any particular perspectives presented in the training data.
---
## Centaurus Series
This series aims to develop highly uncensored Large Language Models (LLMs) with the following focuses:
- Science, Technology, Engineering, and Mathematics (STEM)
- Computer Science (including programming)
- Social Sciences
And several key cognitive skills, including but not limited to:
- Reasoning and logical deduction
- Critical thinking
- Analysis
While maintaining strong overall knowledge and expertise, the models will undergo refinement through:
- Fine-tuning processes
- Model merging techniques including Mixture of Experts (MoE)
Please note that these models are experimental and may demonstrate varied levels of effectiveness. Your feedback, critique, or queries are most welcome for improvement purposes.
## Base
This model and its related LoRA were fine-tuned from [https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3).
## LoRA
The LoRA merged with the base model is available at [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA).
## GGUF
I provide some GGUF files here: [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF).
## Datasets
- [https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
## Fine Tuning
### - Quantization Configuration
- load_in_4bit=True
- bnb_4bit_quant_type="fp4"
- bnb_4bit_compute_dtype=compute_dtype
- bnb_4bit_use_double_quant=False
### - PEFT Parameters
- lora_alpha=64
- lora_dropout=0.05
- r=128
- bias="none"
### - Training Arguments
- num_train_epochs=1
- per_device_train_batch_size=1
- gradient_accumulation_steps=4
- optim="adamw_bnb_8bit"
- save_steps=25
- logging_steps=25
- learning_rate=2e-4
- weight_decay=0.001
- fp16=False
- bf16=False
- max_grad_norm=0.3
- max_steps=-1
- warmup_ratio=0.03
- group_by_length=True
- lr_scheduler_type="constant"
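For reference, here is how these settings might be assembled with the usual transformers/peft tooling. This is a hedged sketch, not the author's exact training script; the compute dtype, task type and output directory are assumptions:
```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.float16,  # assumed value of compute_dtype
    bnb_4bit_use_double_quant=False,
)

peft_config = LoraConfig(
    lora_alpha=64, lora_dropout=0.05, r=128, bias="none", task_type="CAUSAL_LM"
)

training_args = TrainingArguments(
    output_dir="alpha-centauri-v0.1",  # assumed
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    optim="adamw_bnb_8bit",
    save_steps=25,
    logging_steps=25,
    learning_rate=2e-4,
    weight_decay=0.001,
    fp16=False,
    bf16=False,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
```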
## Credits
- Meta ([https://huggingface.co/meta-llama](https://huggingface.co/meta-llama)): for the original Llama-3;
- HuggingFace: for hosting this model and for creating the fine-tuning tools used;
- failspy ([https://huggingface.co/failspy](https://huggingface.co/failspy)): for the base model and the orthogonalization implementation;
- NobodyExistsOnTheInternet ([https://huggingface.co/NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet)): for the incredible dataset;
- Undi95 ([https://huggingface.co/Undi95](https://huggingface.co/Undi95)) and Sao10k ([https://huggingface.co/Sao10K](https://huggingface.co/Sao10K)): my main inspirations for doing these models =]
A huge thank you to all of them ☺️
## About Alpha Centauri
**Alpha Centauri** is a triple star system located in the constellation of **Centaurus**. It includes three stars: Rigil Kentaurus (also known as **α Centauri A**), Toliman (or **α Centauri B**), and Proxima Centauri (**α Centauri C**). Proxima Centauri is the nearest star to the Sun, residing at approximately 4.25 light-years (1.3 parsecs) away.
The primary pair, **α Centauri A** and **B**, are both similar to our Sun - **α Centauri A** being a class G star with 1.1 solar masses and 1.5 times the Sun's luminosity; **α Centauri B** having 0.9 solar masses and under half the luminosity of the Sun. They revolve around their shared center every 79 years following an elliptical path, ranging from 35.6 astronomical units apart (nearly Pluto's distance from the Sun) to 11.2 astronomical units apart (around Saturn's distance from the Sun.)
Proxima Centauri, or **α Centauri C**, is a diminutive, dim red dwarf (a class M star) too faint to be seen with the naked eye. At roughly 4.24 light-years (1.3 parsecs) from us, it lies nearer than **α Centauri AB**, the binary system. Presently, the gap between **Proxima Centauri** and **α Centauri AB** amounts to around 13,000 astronomical units (0.21 light-years), comparable to over 430 times Neptune's orbital radius.
Two confirmed exoplanets accompany Proxima Centauri: **Proxima b**, discovered in 2016, is Earth-sized within the habitable zone; **Proxima d**, revealed in 2022, is a potential sub-Earth close to its host star. Meanwhile, disputes surround **Proxima c**, a mini-Neptune detected in 2019. Intriguingly, hints suggest that **α Centauri A** might possess a Neptune-sized object in its habitable region, but further investigation is required before confirming whether it truly exists and qualifies as a planet. Regarding **α Centauri B**, although once thought to harbor a planet (named **α Cen Bb**), subsequent research invalidated this claim, leaving it currently devoid of identified planets.
**Source:** retrieved from [https://en.wikipedia.org/wiki/Alpha_Centauri](https://en.wikipedia.org/wiki/Alpha_Centauri) and processed with [https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). |
mradermacher/REBEL-OpenChat-3.5-i1-GGUF | mradermacher | 2024-05-30T04:41:39Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:berkeley-nest/Nectar",
"base_model:Cornell-AGI/REBEL-OpenChat-3.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-29T07:28:26Z | ---
base_model: Cornell-AGI/REBEL-OpenChat-3.5
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Cornell-AGI/REBEL-OpenChat-3.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/REBEL-OpenChat-3.5-i1-GGUF/resolve/main/REBEL-OpenChat-3.5.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Ursidae-11b-Mini-i1-GGUF | mradermacher | 2024-06-01T16:28:25Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"mistral",
"llama-2",
"agi",
"probelm solving",
"biology",
"reasoning",
"llama3",
"en",
"base_model:Cognitive-Machines-Labs/Ursidae-11b-Mini",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-31T14:44:21Z | ---
base_model: Cognitive-Machines-Labs/Ursidae-11b-Mini
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
tags:
- llama
- mistral
- llama-2
- llama
- agi
- probelm solving
- biology
- reasoning
- llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Cognitive-Machines-Labs/Ursidae-11b-Mini
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Ursidae-11b-Mini-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ1_S.gguf) | i1-IQ1_S | 2.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ1_M.gguf) | i1-IQ1_M | 3.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ2_S.gguf) | i1-IQ2_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ2_M.gguf) | i1-IQ2_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q2_K.gguf) | i1-Q2_K | 4.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ3_S.gguf) | i1-IQ3_S | 5.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ3_M.gguf) | i1-IQ3_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q4_0.gguf) | i1-Q4_0 | 6.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ursidae-11b-Mini-i1-GGUF/resolve/main/Ursidae-11b-Mini.i1-Q6_K.gguf) | i1-Q6_K | 9.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
RichardErkhov/Undi95_-_Toppy-M-7B-gguf | RichardErkhov | 2024-05-31T18:05:10Z | 439 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-31T15:31:32Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Toppy-M-7B - GGUF
- Model creator: https://huggingface.co/Undi95/
- Original model: https://huggingface.co/Undi95/Toppy-M-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Toppy-M-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Toppy-M-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Toppy-M-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Toppy-M-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Toppy-M-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Toppy-M-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Toppy-M-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Toppy-M-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Toppy-M-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Toppy-M-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Toppy-M-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Toppy-M-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Toppy-M-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Toppy-M-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Toppy-M-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Toppy-M-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Toppy-M-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Toppy-M-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Toppy-M-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Toppy-M-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Toppy-M-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Toppy-M-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_Toppy-M-7B-gguf/blob/main/Toppy-M-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
<!-- description start -->
## Description
This repo contains fp16 files of Toppy-M-7B, a merge I have done with the new task_arithmetic merge method from mergekit.
This project was a request from [BlueNipples](https://huggingface.co/BlueNipples) : [link](https://huggingface.co/Undi95/Utopia-13B/discussions/1)
<!-- description end -->
<!-- description start -->
## Models and loras used
- [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5)
- [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)
- [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- [lemonilia/AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B)
- [Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b)
- [Undi95/Mistral-pippa-sharegpt-7b-qlora](https://huggingface.co/Undi95/Mistral-pippa-sharegpt-7b-qlora)
<!-- description end -->
## The sauce
```
openchat/openchat_3.5
lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38
NousResearch/Nous-Capybara-7B-V1.9
Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b x 0.27
HuggingFaceH4/zephyr-7b-beta
Undi95/Mistral-pippa-sharegpt-7b-qlora x 0.38
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
- model: mistralai/Mistral-7B-v0.1
- model: Undi95/zephyr-7b-beta-pippa-sharegpt
parameters:
weight: 0.42
- model: Undi95/Nous-Capybara-7B-V1.9-120-Days
parameters:
weight: 0.29
- model: Undi95/openchat_3.5-LimaRP-13B
parameters:
weight: 0.48
dtype: bfloat16
```
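The `merge_method`/`base_model`/`models` portion at the bottom is the mergekit configuration; saved to a file such as `config.yml`, a config of this shape can typically be run with mergekit's CLI (for example `mergekit-yaml config.yml ./output-model`), with the file and output paths here given purely for illustration.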
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
danielkosyra/pretraining1 | danielkosyra | 2024-06-03T17:43:09Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T17:42:50Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: pretraining1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pretraining1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9544
## Model description
More information needed
## Intended uses & limitations
More information needed
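As a minimal inference sketch (the prompt and generation settings below are illustrative, not part of the original training setup):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="danielkosyra/pretraining1")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```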
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 1750
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.1539 | 0.2175 | 50 | 7.4415 |
| 6.7732 | 0.4350 | 100 | 6.2695 |
| 5.9925 | 0.6525 | 150 | 5.6597 |
| 5.5089 | 0.8699 | 200 | 5.3122 |
| 5.1352 | 1.0874 | 250 | 4.9753 |
| 4.8081 | 1.3049 | 300 | 4.6730 |
| 4.5188 | 1.5224 | 350 | 4.3827 |
| 4.2789 | 1.7399 | 400 | 4.0967 |
| 4.0202 | 1.9574 | 450 | 3.8260 |
| 3.7049 | 2.1749 | 500 | 3.6475 |
| 3.5838 | 2.3923 | 550 | 3.5435 |
| 3.4797 | 2.6098 | 600 | 3.4532 |
| 3.3985 | 2.8273 | 650 | 3.3774 |
| 3.3155 | 3.0448 | 700 | 3.3162 |
| 3.1677 | 3.2623 | 750 | 3.2712 |
| 3.1373 | 3.4798 | 800 | 3.2299 |
| 3.1123 | 3.6973 | 850 | 3.1968 |
| 3.0845 | 3.9147 | 900 | 3.1670 |
| 2.9699 | 4.1322 | 950 | 3.1366 |
| 2.9081 | 4.3497 | 1000 | 3.1155 |
| 2.8884 | 4.5672 | 1050 | 3.0914 |
| 2.895 | 4.7847 | 1100 | 3.0723 |
| 2.8741 | 5.0022 | 1150 | 3.0502 |
| 2.7174 | 5.2197 | 1200 | 3.0404 |
| 2.7323 | 5.4371 | 1250 | 3.0270 |
| 2.7269 | 5.6546 | 1300 | 3.0144 |
| 2.7096 | 5.8721 | 1350 | 3.0003 |
| 2.6517 | 6.0896 | 1400 | 2.9921 |
| 2.5758 | 6.3071 | 1450 | 2.9867 |
| 2.5855 | 6.5246 | 1500 | 2.9795 |
| 2.577 | 6.7421 | 1550 | 2.9690 |
| 2.5656 | 6.9595 | 1600 | 2.9590 |
| 2.4813 | 7.1770 | 1650 | 2.9604 |
| 2.4747 | 7.3945 | 1700 | 2.9563 |
| 2.4696 | 7.6120 | 1750 | 2.9544 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Jsoo/Llama3-beomi-Open-Ko-8B-Instruct-preview-test6 | Jsoo | 2024-06-04T08:05:55Z | 439 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-04T04:06:21Z | ---
language:
- en
- ko
license: other
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---
# Model Card for Model ID
## Model Details
Llama-3-Open-Ko-8B is a continually pretrained language model based on Llama-3-8B.
This model was trained entirely on publicly available resources, with 60GB+ of deduplicated text.
With the new Llama-3 tokenizer, pretraining was conducted on 17.7B+ tokens, slightly more than with the previous Korean tokenizer (the Llama-2-Ko tokenizer).
**Sample usage**
```
from transformers import AutoTokenizer, pipeline
import torch

# The pipeline below expects `model` and `tokenizer`; point them at this repo.
model = "Jsoo/Llama3-beomi-Open-Ko-8B-Instruct-preview-test6"
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
truncation=True
)
def extract_response_llama3(question):
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": question},
]
prompt = pipe.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipe.tokenizer.eos_token_id,
pipe.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipe(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.1,
top_p=0.9,
num_return_sequences=1
)
return outputs[0]['generated_text'].split('\n')[-1]
question = "예산을 분배할 때 사업의 우선 순위를 정해서 차등 지원하는 방법을 뭐라고 하지"
response = extract_response_llama3(question)
print(response)
question = "미세먼지 생성물질의 배출을 저감하고 종합적으로 관리하기 위한 법을 어디서 제정했니"
response = extract_response_llama3(question)
print(response)
question = "어떤 장소의 대기오염을 방지하기 위한 정책의 법적 근거가 특별법의 제정으로 준비되었지"
response = extract_response_llama3(question)
print(response)
```
**Sample Output**
```
선택과 집중
환경부
항만
```
|
mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF | mradermacher | 2024-06-29T05:40:54Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"yi",
"moe",
"DPO",
"en",
"base_model:cloudyu/Yi-34Bx2-MoE-60B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-13T07:15:14Z | ---
base_model: cloudyu/Yi-34Bx2-MoE-60B-DPO
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
tags:
- yi
- moe
- DPO
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cloudyu/Yi-34Bx2-MoE-60B-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q2_K.gguf) | Q2_K | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.IQ3_XS.gguf) | IQ3_XS | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q3_K_S.gguf) | Q3_K_S | 26.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.IQ3_S.gguf) | IQ3_S | 26.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.IQ3_M.gguf) | IQ3_M | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q3_K_M.gguf) | Q3_K_M | 29.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q3_K_L.gguf) | Q3_K_L | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.IQ4_XS.gguf) | IQ4_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q4_K_S.gguf) | Q4_K_S | 34.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q4_K_M.gguf) | Q4_K_M | 36.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q5_K_S.gguf) | Q5_K_S | 42.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q5_K_M.gguf) | Q5_K_M | 43.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q6_K.gguf) | Q6_K | 50.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.Q8_0.gguf.part2of2) | Q8_0 | 64.7 | fast, best quality |
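The Q8_0 quant above is split into two parts for upload; the README linked under Usage explains how to join multi-part files (typically a simple concatenation of the `part1of2` and `part2of2` files into a single `.gguf` before loading).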
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf | RichardErkhov | 2024-06-18T06:56:33Z | 439 | 0 | null | [
"gguf",
"arxiv:2402.13228",
"region:us"
]
| null | 2024-06-17T14:28:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Smaug-72B-v0.1 - GGUF
- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Smaug-72B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Smaug-72B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.Q2_K.gguf) | Q2_K | 25.22GB |
| [Smaug-72B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.IQ3_XS.gguf) | IQ3_XS | 27.88GB |
| [Smaug-72B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.IQ3_S.gguf) | IQ3_S | 29.4GB |
| [Smaug-72B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.Q3_K_S.gguf) | Q3_K_S | 29.4GB |
| [Smaug-72B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.IQ3_M.gguf) | IQ3_M | 30.98GB |
| [Smaug-72B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.Q3_K.gguf) | Q3_K | 32.85GB |
| [Smaug-72B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.Q3_K_M.gguf) | Q3_K_M | 32.85GB |
| [Smaug-72B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.Q3_K_L.gguf) | Q3_K_L | 35.85GB |
| [Smaug-72B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/blob/main/Smaug-72B-v0.1.IQ4_XS.gguf) | IQ4_XS | 36.41GB |
| [Smaug-72B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q4_0 | 38.19GB |
| [Smaug-72B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | IQ4_NL | 38.42GB |
| [Smaug-72B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q4_K_S | 38.45GB |
| [Smaug-72B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q4_K | 40.77GB |
| [Smaug-72B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q4_K_M | 40.77GB |
| [Smaug-72B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q4_1 | 42.32GB |
| [Smaug-72B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q5_0 | 46.46GB |
| [Smaug-72B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q5_K_S | 46.46GB |
| [Smaug-72B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q5_K | 47.79GB |
| [Smaug-72B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q5_K_M | 47.79GB |
| [Smaug-72B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q5_1 | 50.59GB |
| [Smaug-72B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q6_K | 55.24GB |
| [Smaug-72B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Smaug-72B-v0.1-gguf/tree/main/) | Q8_0 | 71.55GB |
Original model description:
---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
base_model: moreh/MoMo-72B-lora-1.8.7-DPO
---


Smaug arrives!
We recently released Smaug-72B-v0.1 which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model to surpass an average score of 80%.
Smaug-72B is finetuned directly from [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO) and is ultimately based on [Qwen-72B](https://huggingface.co/Qwen/Qwen-72B).
We created Smaug-72B-v0.1 using a new fine-tuning technique, DPO-Positive (DPOP), and new pairwise preference versions of ARC, HellaSwag, and MetaMath (as well as other existing datasets). We introduce the technique and the full training details in our new paper: https://arxiv.org/abs/2402.13228.
We show that on datasets in which the edit distance between pairs of completions is low (such as in math-based datasets), standard DPO loss can lead to a reduction of the model's
likelihood of the preferred examples, as long as the relative probability between the preferred and dispreferred classes increases.
Using these insights, we design DPOP, a new loss function and training procedure which avoids this failure mode.
Surprisingly, we also find that DPOP outperforms DPO across a wide variety of datasets and downstream tasks, including datasets with high edit distances between completions.
We believe this new approach is generally useful in training across a wide range of model types and downstream use cases, and it powers all of our Smaug models.
With the release of our paper and datasets, we are excited for the open source community to continue to build on and improve Smaug and spawn more dragons to dominate the LLM space!
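For intuition only, here is a rough PyTorch sketch of a DPO-Positive-style objective: a standard DPO term plus a penalty that fires when the policy's log-probability of the preferred completion drops below the reference model's. The exact formulation and the values of beta and lambda are given in the paper; this sketch is illustrative rather than the released training code:
```python
import torch
import torch.nn.functional as F

def dpop_style_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.3, lam=50.0):
    """pi_logp_* / ref_logp_*: summed log-probs of the preferred (w) and
    dispreferred (l) completions under the policy and the frozen reference."""
    logratio_w = pi_logp_w - ref_logp_w
    logratio_l = pi_logp_l - ref_logp_l
    # Penalty is zero unless the preferred completion became *less* likely than under the reference.
    penalty = torch.clamp(ref_logp_w - pi_logp_w, min=0.0)
    return -F.logsigmoid(beta * (logratio_w - logratio_l - lam * penalty)).mean()
```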
### Evaluation Results
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 80.48 | 76.02 | 89.27 | 77.15 | 76.67 | 85.08 | 78.70 |
#### MT-Bench
We ran MT-Bench with the llama-2 conversation template and the system prompt set to the Qwen system prompt. We got the following
results in single model mode:
| First Turn | Second Turn | Average |
| ---------- | ----------- | ------- |
| 8.18 | 7.34 | 7.76 |
The bottom of the card includes some sample responses from the model. We have selected responses that have high and low scores
from the GPT-4 judge.
### Contamination Results
We generate our contamination numbers using https://github.com/swj0419/detect-pretrain-code-contamination/tree/master, with Llama7B as our reference model.
Smaug-72B has the following results:
| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.20| 0.45| 1.00|
By comparison, MoMo-72B-lora-1.8.7-DPO has the following results:
| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.20| 0.39| 1.00|
Note that GSM8K often scores very highly on this contamination suite - we verified this by also running Llama-2-70B:
| ARC | TruthfulQA | GSM8K |
| --- | --- | --- |
| 0.22| 0.51| 0.89|
### Sample MT Bench results
#### Example 1
**Question**
Imagine you are writing a blog post comparing two popular smartphone models.
Develop an outline for the blog post, including key points and subheadings to effectively
compare and contrast the features, performance, and user experience of the two models.
Please answer in fewer than 200 words.
**Response**
Outline:
1. Introduction to the two smartphone models being compared (mention brand names and models)
2. Design and Build Quality
- Comparison of dimensions, weight, materials used, and overall aesthetic appeal
- Water and dust resistance, if applicable
3. Display and Screen Quality
- Comparison of screen technology (OLED, IPS LCD, etc.)
- Display resolution, refresh rate, and brightness levels
4. Performance and Hardware
- Processor, RAM, and storage configurations
- Comparison of benchmark scores (e.g., AnTuTu, Geekbench)
5. Camera System and Image Quality
- Comparison of primary, ultra-wide, telephoto, and macro lenses (if available)
- Comparison of low-light performance and image processing capabilities
6. Battery Life and Charging
- Battery capacity and estimated usage time
- Charging technologies (wired, wireless, fast charging, reverse charging)
7. Software and User Experience
- Comparison of operating systems (iOS vs. Android) and their features
- User interface, customization options, and app ecosystem
8. Additional Features and Extras
- Comparison of biometric authentication (fingerprint, face unlock, etc.)
- Additional features like NFC, 5G connectivity, and audio quality
9. Pricing and Value for Money
- Comparison of retail prices and current offers
- Assessment of overall value considering the features and performance
10. Conclusion and recommendation for potential buyers based on their preferences
#### Example 2
**Question**
In a survey conducted at a local high school, preferences for a new school color were measured:
58% of students liked the color blue, 45% preferred green, and 22% liked both colors.
If we randomly pick a student from the school, what's the probability that they would like neither blue nor green?
**Answer**
To find the probability that a randomly picked student would like neither blue nor green, we need to find the percentage of students who like only one color or neither color.
From the given information, we know:
- 58% like blue
- 45% prefer green
- 22% like both colors
To find the percentage of students who like only one color, we can subtract the percentage of students who like both colors from the sum of students who like blue and green:
- 58% (blue) + 45% (green) - 22% (both) = 83%
Now, to find the probability of a student liking neither blue nor green, we subtract the percentage of students who like at least one color from 100%:
- 100% - 83% = 17%
So, the probability that a randomly picked student would like neither blue nor green is 17%.
#### Example 3
**Question**
Implement a program to find the common elements in two arrays without using any extra data structures.
**Answer**
Finding common elements in two arrays without using extra data structures can be achieved by using a simple algorithm. Here's a Python implementation of this problem:
```python
def find_common_elements(arr1, arr2):
common_elements = []
for num1 in arr1:
if num1 in arr2 and num1 not in common_elements:
common_elements.append(num1)
return common_elements
# Example usage:
array1 = [1, 2, 3, 4, 5]
array2 = [4, 5, 6, 7, 8]
print(find_common_elements(array1, array2)) # Output: [4, 5]
```
This program iterates through the elements of the first array (arr1) and checks if each element is present in the second array (arr2). If it is, and the element hasn't been added to the common_elements list yet, it is appended. The function then returns the list of common elements.
#### Example 4
**Question**
Identify the named entities (people, organizations, locations) mentioned in the given news article.
Please generate a JSON dictionary that lists the named entities in three separate groups based on their entity types.
The key is the type of entity and the value is a list of strings.
Yesterday, Adamson Emerson, the CEO of Faraday, and Dieter Zetsche, the CEO of Daimler AG, announced plans to
build a new Gigafactory in Berlin. The facility will be a joint venture between Faraday and Daimler, producing
electric vehicles and battery packs for both companies, creating thousands of job opportunities in the region.
Emerson and Zetsche stated that the strategic location of Berlin, coupled with its skilled workforce and strong infrastructure,
makes it an ideal choice for expansion. The new Gigafactory aims to meet the growing demand for electric vehicles in Europe
and contribute to a sustainable future. Volkswagen CEO Herbert Diess welcomed the news, saying greater collaboration will,
benefit the auto industry's transition to e-mobility.
**Answer**
```
{
"people": ["Adamson Emerson", "Dieter Zetsche", "Herbert Diess"],
"organizations": ["Faraday", "Daimler AG", "Volkswagen"],
"locations": ["Berlin", "Europe"]
}
```
### Citation
Please cite the paper if you use data, model, or method in this repo.
```
@article{pal2024smaug,
title={Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive},
author={Pal, Arka and Karkhanis, Deep and Dooley, Samuel and Roberts, Manley and Naidu, Siddartha and White, Colin},
journal={arXiv preprint arXiv:2402.13228},
year={2024}
}
```
|
Ammartatox/llama3-mawared-2E-Q4_K_M-GGUF | Ammartatox | 2024-06-23T11:30:01Z | 439 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Ammartatox/llama3-mawared-2E",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-23T11:29:40Z | ---
base_model: Ammartatox/llama3-mawared-2E
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-repo
---
# Ammartatox/llama3-mawared-2E-Q4_K_M-GGUF
This model was converted to GGUF format from [`Ammartatox/llama3-mawared-2E`](https://huggingface.co/Ammartatox/llama3-mawared-2E) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Ammartatox/llama3-mawared-2E) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ammartatox/llama3-mawared-2E-Q4_K_M-GGUF --hf-file llama3-mawared-2e-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ammartatox/llama3-mawared-2E-Q4_K_M-GGUF --hf-file llama3-mawared-2e-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ammartatox/llama3-mawared-2E-Q4_K_M-GGUF --hf-file llama3-mawared-2e-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ammartatox/llama3-mawared-2E-Q4_K_M-GGUF --hf-file llama3-mawared-2e-q4_k_m.gguf -c 2048
```
|
NikolayKozloff/Viking-7B-Q4_K_S-GGUF | NikolayKozloff | 2024-06-29T19:05:22Z | 439 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T19:05:03Z | ---
base_model: LumiOpen/Viking-7B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Viking-7B-Q4_K_S-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-7B`](https://huggingface.co/LumiOpen/Viking-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-7B-Q4_K_S-GGUF --hf-file viking-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-7B-Q4_K_S-GGUF --hf-file viking-7b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-7B-Q4_K_S-GGUF --hf-file viking-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-7B-Q4_K_S-GGUF --hf-file viking-7b-q4_k_s.gguf -c 2048
```
|
Luyu/condenser | Luyu | 2021-08-13T13:38:57Z | 438 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | Entry not found |
inokufu/flaubert-base-uncased-xnli-sts-finetuned-education | inokufu | 2022-07-26T10:59:20Z | 438 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"flaubert",
"feature-extraction",
"sentence-similarity",
"transformers",
"Education",
"fr",
"xnli",
"stsb_multi_mt",
"dataset:xnli",
"dataset:stsb_multi_mt",
"arxiv:1810.04805",
"arxiv:1809.05053",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
language: fr
tags:
- sentence-similarity
- transformers
- Education
- fr
- flaubert
- sentence-transformers
- feature-extraction
- xnli
- stsb_multi_mt
datasets:
- xnli
- stsb_multi_mt
---
# inokufu/bertheo
A [sentence-transformers](https://www.SBERT.net) model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Details
This model is based on the French flaubert-base-uncased pre-trained model [1, 2].
It was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences.
It was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).
It was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences.
This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education')
model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
STS (fr) score: 83.05%
## Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## References
[1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br>
[2] https://huggingface.co/flaubert/flaubert_base_uncased <br>
[3] https://arxiv.org/abs/1810.04805 <br>
[4] https://arxiv.org/abs/1809.05053 <br>
[5] https://huggingface.co/datasets/stsb_multi_mt <br>
|
savasy/bert-base-turkish-squad | savasy | 2024-02-01T09:17:38Z | 438 | 12 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"tr",
"arxiv:2401.17396",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: tr
---
# Turkish SQuAD Model : Question Answering
I fine-tuned a Turkish BERT model for the question-answering task using TQuAD, the Turkish version of SQuAD.
* BERT-base: https://huggingface.co/dbmdz/bert-base-turkish-uncased
* TQuAD dataset: https://github.com/TQuad/turkish-nlp-qa-dataset
# Citation
Please cite if you use it in your study
```
@misc{yildirim2024finetuning,
title={Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks},
author={Savas Yildirim},
year={2024},
eprint={2401.17396},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@book{yildirim2021mastering,
title={Mastering Transformers: Build state-of-the-art models from scratch with advanced natural language processing techniques},
author={Yildirim, Savas and Asgari-Chenaghlu, Meysam},
year={2021},
publisher={Packt Publishing Ltd}
}
```
# Training Code
```
!python3 run_squad.py \
--model_type bert \
--model_name_or_path dbmdz/bert-base-turkish-uncased\
--do_train \
--do_eval \
--train_file trainQ.json \
--predict_file dev1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 5.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir "./model"
```
# Example Usage
> Load Model
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
import torch
tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-squad")
model = AutoModelForQuestionAnswering.from_pretrained("savasy/bert-base-turkish-squad")
nlp=pipeline("question-answering", model=model, tokenizer=tokenizer)
```
> Apply the model
```
sait="ABASIYANIK, Sait Faik. Hikayeci (Adapazarı 23 Kasım 1906-İstanbul 11 Mayıs 1954). \
İlk öğrenimine Adapazarı’nda Rehber-i Terakki Mektebi’nde başladı. İki yıl kadar Adapazarı İdadisi’nde okudu.\
İstanbul Erkek Lisesi’nde devam ettiği orta öğrenimini Bursa Lisesi’nde tamamladı (1928). İstanbul Edebiyat \
Fakültesi’ne iki yıl devam ettikten sonra babasının isteği üzerine iktisat öğrenimi için İsviçre’ye gitti. \
Kısa süre sonra iktisat öğrenimini bırakarak Lozan’dan Grenoble’a geçti. Üç yıl başıboş bir edebiyat öğrenimi \
gördükten sonra babası tarafından geri çağrıldı (1933). Bir müddet Halıcıoğlu Ermeni Yetim Mektebi'nde Türkçe \
gurup dersleri öğretmenliği yaptı. Ticarete atıldıysa da tutunamadı. Bir ay Haber gazetesinde adliye muhabirliği\
yaptı (1942). Babasının ölümü üzerine aileden kalan emlakin geliri ile avare bir hayata başladı. Evlenemedi.\
Yazları Burgaz adasındaki köşklerinde, kışları Şişli’deki apartmanlarında annesi ile beraber geçen bu fazla \
içkili bohem hayatı ömrünün sonuna kadar sürdü."
print(nlp(question="Ne zaman avare bir hayata başladı?", context=sait))
print(nlp(question="Sait Faik hangi Lisede orta öğrenimini tamamladı?", context=sait))
```
```
# Ask your self ! type your question
print(nlp(question="...?", context=sait))
```
Check my other models:
https://huggingface.co/savasy
|
facebook/roscoe-512-roberta-base | facebook | 2023-01-12T17:07:11Z | 438 | 5 | transformers | [
"transformers",
"pytorch",
"roberta",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-26T16:49:49Z | ---
license: cc-by-nc-4.0
---
# roscoe-512-roberta-base
## Model description
Sentence embedding model for reasoning steps.
To obtain reasoning step embeddings, we finetune SimCSE (Gao et al., 2021), a
supervised sentence similarity model extending the RoBERTa word embedding model (Liu et al., 2019) on
multi-step reasoning datasets we listed in §5 (see details in Golovneva et al., 2022). SimCSE is a contrastive learning model
that is trained on triplets of reference reasoning steps, positive and hard-negative hypothesis reasoning steps
to minimize the cross-entropy objective with in-batch negatives. For contrastive learning, we use the context
and reference reasoning steps as a positive sample, and context and perturbed reference steps as
hard-negative pairs. With the finetuned model we embed each individual step, as well as the reasoning chain as a whole. We use the pretrained checkpoint of the supervised SimCSE model sup-simcse-roberta-base to initialize our model, and further train it for five epochs on our synthetic training data.
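As a quick-start illustration (my own sketch, not an example from this card), the checkpoint should load like any RoBERTa encoder with `transformers`, with step embeddings taken from the [CLS] position as SimCSE-style models do:
```python
# Hypothetical usage sketch: embed reasoning steps via the [CLS] token and compare them.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "facebook/roscoe-512-roberta-base"
# If the repo does not ship tokenizer files, the standard roberta-base tokenizer should be compatible.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

steps = [
    "All squares are rectangles.",
    "This shape is a square, so it is a rectangle.",
]
inputs = tokenizer(steps, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

embeddings = hidden[:, 0]  # [CLS] embedding for each reasoning step
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```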
## Training data
To train the model, we construct a dataset by generating perturbations (i.e., deterministic modifications) on half of the reference reasoning steps in the following sets: Entailment-Bank
(deductive reasoning), ProofWriter (logical reasoning); three arithmetic reasoning datasets MATH, ASDIV and AQUA; EQASC
(explanations for commonsense question answering), and StrategyQA (question answering with implicit reasoning strategies).
## References
1. Tianyu Gao, Xingcheng Yao, and Danqi Chen. Simcse: Simple contrastive learning of sentence embeddings.
arXiv preprint arXiv:2104.08821, 2021.
2. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv
preprint arXiv:1907.11692, 2019.
3. Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz.
ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning. arXiv:2212.07919, 2022.
## Citation
@article{golovneva2022roscoe,
title={{ROSCOE}: A Suite of Metrics for Scoring Step-by-Step Reasoning},
author={Golovneva, Olga and Chen, Moya and Poff, Spencer and Corredor, Martin and Zettlemoyer, Luke and Fazel-Zarandi, Maryam and Celikyilmaz, Asli},
journal={arXiv preprint arXiv:2212.07919},
year={2022}
} |
heegyu/kobart-text-style-transfer | heegyu | 2022-12-18T10:39:49Z | 438 | 6 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-12-18T09:59:31Z | A Korean text style transfer model created by fine-tuning the kobart model on the Korean Smilestyle Dataset.
Example
```
styles = ['문어체','구어체','안드로이드','아재','채팅',
'초등학생','이모티콘','enfp','신사','할아버지','할머니','중학생',
'왕','나루토','선비','소심한','번역기']
model = pipeline(
'text2text-generation',
model='heegyu/kobart-text-style-transfer'
)
def transfer_text_style(text, target_style, **kwargs):
input = f"{target_style} 말투로 변환:{text}"
out = model(input, max_length=64, **kwargs)
print(text, target_style, out[0]['generated_text'], sep="->")
text = "반가운. 나는 6마리의 고양이를 소지하고 있다."
for style in styles:
transfer_text_style(text, style)
```
Results
```
반가운. 나는 6마리의 고양이를 소지하고 있다.->문어체->안녕하세요. 저는 6마리의 고양이를 가지고 있습니다.
반가운. 나는 6마리의 고양이를 소지하고 있다.->구어체->안녕. 나는 6마리의 고양이를 가지고 있어.
반가운. 나는 6마리의 고양이를 소지하고 있다.->안드로이드->반갑다. 안드로이드. 6마리. 고양이. 보유.
반가운. 나는 6마리의 고양이를 소지하고 있다.->아재->안녕~~~~ 6마리의 고양이를 가지고 있네
반가운. 나는 6마리의 고양이를 소지하고 있다.->채팅->하이~ 6마리의 고양이 있음
반가운. 나는 6마리의 고양이를 소지하고 있다.->초등학생->ᄒᄋ 난 6마리 고양이 ᄏᄏ
반가운. 나는 6마리의 고양이를 소지하고 있다.->이모티콘->안녕!~()~ 난 6마리의 고양이를 가지고 있어 (皿)
반가운. 나는 6마리의 고양이를 소지하고 있다.->enfp->안녕!!~ 난 6마리의 고양이를 둬!
반가운. 나는 6마리의 고양이를 소지하고 있다.->신사->안녕하십니까, 저는 6마리의 고양이를 가지고 있습니다.
반가운. 나는 6마리의 고양이를 소지하고 있다.->할아버지->안녕하신가...나는 6마리의 고양이를 가지고 있구먼...
반가운. 나는 6마리의 고양이를 소지하고 있다.->할머니->염병 염병할 고양이 놈이여
반가운. 나는 6마리의 고양이를 소지하고 있다.->중학생->ᄒᄋ 난 6마리 고양이 키우는데
반가운. 나는 6마리의 고양이를 소지하고 있다.->왕->반갑소. 짐은 6마리의 고양이를 소유하고 있소.
반가운. 나는 6마리의 고양이를 소지하고 있다.->나루토->안녕하냐니깐! 난 6마리의 고양이를 가지고 있다니깐!
반가운. 나는 6마리의 고양이를 소지하고 있다.->선비->안녕하시오! 소생은 6마리의 고양이를 가지고 있소!
반가운. 나는 6마리의 고양이를 소지하고 있다.->소심한->안녕.... 난 6마리 고양이 있어..
반가운. 나는 6마리의 고양이를 소지하고 있다.->번역기->반가운, 나는 6마리의 고양이를 가지고 있다.
``` |
DucHaiten/DucHaitenDreamWorld | DucHaiten | 2023-03-28T15:19:02Z | 438 | 24 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-06T16:26:07Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
inference: true
---
After many days of not eating well and sleeping only 4 hours a night, version 2.4.1 of the DucHaitenDreamWorld model is finally complete. It is a huge improvement - just looking at the sample images is enough to see how much better it is. At least it's not as bad as the previous version :)
Dream World is my model for Disney- and Pixar-style art.
xformers on, no VAE (I haven't tried it with a VAE, so I don't know if it's good or bad)
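If you want to try it from Python, here is a minimal sketch with diffusers (my assumption based on the StableDiffusionPipeline tag, not an official example; adjust dtype and device to your hardware):
```python
# Hypothetical quick-start sketch using diffusers (not an official example).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaitenDreamWorld",
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # optional, only if xformers is installed

prompt = "a cozy cottage in a magical forest, pixar style, highly detailed, soft lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("dreamworld_sample.png")
```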
Please support me by becoming a patron:
https://www.patreon.com/duchaitenreal







![00376-1484770875-[uploaded e621], by Pino Daeni, by Ruan Jia, by Fumiko, by Alayna Lemmer, by Carlo Galli Bibiena, solo female ((Vulpix)) with ((.png](https://s3.amazonaws.com/moonup/production/uploads/1676126509917-630b58b279d18d5e53e3a5a9.png)



|
stablediffusionapi/abyssorangemix2nsfw | stablediffusionapi | 2023-05-19T05:02:25Z | 438 | 2 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-01T05:04:00Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AbyssOrangeMix2NSFW API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and change **model_id** to "abyssorangemix2nsfw".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/abyssorangemix2nsfw)
Credits: [View credits](https://civitai.com/?query=AbyssOrangeMix2NSFW)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "abyssorangemix2nsfw",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
sail-rvc/21savage | sail-rvc | 2023-07-14T07:18:07Z | 438 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:17:24Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# 21savage
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:18:07
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
vishnun/codenlbert-sm | vishnun | 2023-08-03T18:01:06Z | 438 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"nlp",
"code",
"classification",
"en",
"dataset:vishnun/CodevsNL",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-30T04:40:34Z | ---
datasets:
- vishnun/CodevsNL
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- nlp
- code
- classification
---
## Preface
Code vs. natural language classification using bert-small from prajwall; the metrics achieved are below.
## Training Metrics
| Epoch | Training Loss | Validation Loss | Accuracy |
|------:|--------------:|----------------:|---------:|
| 1 | 0.022500 | 0.012705 | 0.997203 |
| 2 | 0.008700 | 0.013107 | 0.996880 |
| 3 | 0.002700 | 0.014081 | 0.997633 |
| 4 | 0.001800 | 0.010666 | 0.997526 |
| 5 | 0.000900 | 0.010800 | 0.998063 |
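A minimal usage sketch (my assumption, since the repo is tagged for text classification; the label names come from the checkpoint's config):
```python
# Hypothetical usage sketch with the transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="vishnun/codenlbert-sm")

samples = [
    "for i in range(10): print(i)",          # code
    "The weather was lovely this morning.",  # natural language
]
for text in samples:
    print(text, "->", classifier(text)[0])
```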
## More
- Github repo for installable python package: https://github.com/Vishnunkumar
- Space on the extraction of code blocks from screenshots: https://huggingface.co/spaces/vishnun/SnapCode |
TheBloke/LLongMA-2-7B-GGUF | TheBloke | 2023-09-27T12:47:41Z | 438 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2108.12409",
"arxiv:2212.10554",
"base_model:conceptofmind/LLongMA-2-7b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T15:20:47Z | ---
license: llama2
model_name: LLongMA 2 7B
base_model: conceptofmind/LLongMA-2-7b
inference: false
model_creator: Enrico Shippole
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LLongMA 2 7B - GGUF
- Model creator: [Enrico Shippole](https://huggingface.co/conceptofmind)
- Original model: [LLongMA 2 7B](https://huggingface.co/conceptofmind/LLongMA-2-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [ConceptofMind's LLongMA 2 7B](https://huggingface.co/conceptofmind/LLongMA-2-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLongMA-2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLongMA-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF)
* [Enrico Shippole's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/conceptofmind/LLongMA-2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llongma-2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llongma-2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llongma-2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llongma-2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llongma-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llongma-2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llongma-2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llongma-2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llongma-2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llongma-2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llongma-2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llongma-2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF/blob/main/llongma-2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLongMA-2-7B-GGUF and below it, a specific filename to download, such as: llongma-2-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLongMA-2-7B-GGUF llongma-2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLongMA-2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLongMA-2-7B-GGUF llongma-2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llongma-2-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/LLongMA-2-7B-GGUF", model_file="llongma-2-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain; a minimal sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
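As a starting point, a minimal, hypothetical llama-cpp-python + LangChain sketch (parameter values are placeholders; the guides above are authoritative):
```python
# Hypothetical sketch: using a downloaded GGUF file through LangChain's LlamaCpp wrapper.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llongma-2-7b.Q4_K_M.gguf",  # path to the GGUF file you downloaded
    n_ctx=4096,        # context length; this model supports up to 8k
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm("AI is going to"))
```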
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: ConceptofMind's LLongMA 2 7B
LLongMA-2 is a suite of Llama-2 models trained at 8k context length using linear positional interpolation scaling. The model was trained in collaboration with Emozilla of NousResearch and Kaiokendev.
We worked directly with Kaiokendev to extend the context length of the Llama-2 7b model through fine-tuning. The models pass all our evaluations and maintain the same perplexity at 8k extrapolation, surpassing the performance of other recent methodologies.
The model has identical performance to LLaMA 2 under 4k context length, performance scales directly to 8k, and it works out-of-the-box with the new version of transformers (4.31), or with `trust_remote_code` for <= 4.30.
A Llama-2 13b model trained at 8k will be released soon on Hugging Face here: https://huggingface.co/conceptofmind/LLongMA-2-13b
Applying the method to the rotary position embedding requires only slight changes to the model's code by dividing the positional index, t, by a scaling factor.
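For illustration only, a minimal sketch of that idea (not the code from the repositories linked below): the rotary angles are computed from positions divided by the scale factor.
```python
# Illustrative sketch of linear positional interpolation for rotary embeddings.
# The only change versus standard RoPE is dividing the position index by a scale factor
# (e.g. 2.0 to stretch a 4k-trained model to 8k context).
import torch

def scaled_rotary_angles(seq_len, dim, scale=2.0, base=10000.0):
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    t = torch.arange(seq_len).float() / scale   # positional index t divided by the scaling factor
    freqs = torch.outer(t, inv_freq)            # (seq_len, dim // 2)
    return freqs.cos(), freqs.sin()

cos, sin = scaled_rotary_angles(seq_len=8192, dim=128, scale=2.0)
print(cos.shape, sin.shape)  # torch.Size([8192, 64]) torch.Size([8192, 64])
```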
The repository containing u/emozilla’s implementation of scaled rotary embeddings can be found here: https://github.com/jquesnelle/scaled-rope
If you would like to learn more about scaling rotary embeddings, I would strongly recommend reading u/kaiokendev's blog posts on his findings: https://kaiokendev.github.io/
A PR to add scaled rotary embeddings to Huggingface transformers has been added by u/joao_gante and merged: https://github.com/huggingface/transformers/pull/24653
The model was trained for ~1 billion tokens on Togethercompute's Red Pajama dataset. The context length of the examples varies: https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
The pre-tokenized dataset will be available here for you to use soon: https://huggingface.co/datasets/conceptofmind/rp-llama-2-7b-tokenized-chunked
I would also recommend checking out the phenomenal research by Ofir Press on ALiBi which laid the foundation for many of these scaling techniques: https://arxiv.org/abs/2108.12409
It is also worth reviewing the paper, A Length-Extrapolatable Transformer, and xPos technique which also applies scaling to rotary embeddings: https://arxiv.org/pdf/2212.10554.pdf
We previously trained the first publicly available model with rotary embedding scaling here: https://twitter.com/EnricoShippole/status/1655599301454594049?s=20
A Llama-2 13b model trained at 8k will be released soon, as will a suite of Llama-2 models trained at 16k context lengths.
You can find out more about the NousResearch organization here: https://huggingface.co/NousResearch
The compute for this model release is all thanks to the generous sponsorship by CarperAI, Emad Mostaque, and StabilityAI. This is not an official StabilityAI product.
If you have any questions about the data or model be sure to reach out and ask! I will try to respond promptly.
The previous suite of LLongMA model releases can be found here: https://twitter.com/EnricoShippole/status/1677346578720256000?s=20
All of the models can be found on Huggingface: https://huggingface.co/conceptofmind
You can find the Llama-2 usage policy here: https://ai.meta.com/llama/use-policy/
Llama 2 Community License Agreement
Llama 2 Version Release Date: July 18, 2023
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.
“Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
<!-- original-model-card end -->
|
TheBloke/Karen_theEditor_13B-GGUF | TheBloke | 2023-09-27T12:53:15Z | 438 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"lora",
"base_model:FPHam/Karen_theEditor_13b_HF",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T01:45:12Z | ---
license: other
tags:
- lora
model_name: Karen The Editor 13B
base_model: FPHam/Karen_theEditor_13b_HF
inference: false
model_creator: FPHam
model_type: llama
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Karen The Editor 13B - GGUF
- Model creator: [FPHam](https://huggingface.co/FPHam)
- Original model: [Karen The Editor 13B](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
<!-- description start -->
## Description
This repo contains GGUF format model files for [FPHam's Karen The Editor 13B](https://huggingface.co/FPHam/Karen_theEditor_13b_HF).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Karen_theEditor_13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF)
* [FPHam's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Karen-The-Editor.Q2_K.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [Karen-The-Editor.Q3_K_S.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [Karen-The-Editor.Q3_K_M.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [Karen-The-Editor.Q3_K_L.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [Karen-The-Editor.Q4_0.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Karen-The-Editor.Q4_K_S.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [Karen-The-Editor.Q4_K_M.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [Karen-The-Editor.Q5_0.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Karen-The-Editor.Q5_K_S.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [Karen-The-Editor.Q5_K_M.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [Karen-The-Editor.Q6_K.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [Karen-The-Editor.Q8_0.gguf](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGUF/blob/main/Karen-The-Editor.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Karen_theEditor_13B-GGUF and below it, a specific filename to download, such as: Karen-The-Editor.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Karen_theEditor_13B-GGUF Karen-The-Editor.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Karen_theEditor_13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Karen_theEditor_13B-GGUF Karen-The-Editor.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m Karen-The-Editor.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Karen_theEditor_13B-GGUF", model_file="Karen-The-Editor.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: FPHam's Karen The Editor 13B
<!-- header start -->
<div style="width: 100%;">
<img src="https://media.tenor.com/frGCmLDFbkMAAAAC/karen-ok.gif" alt="FPHam's Karen" style="width: 30%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
## Karen is an editor for your fiction. (v.0.2)
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in wit and wisdom but respectful of your style.
She is also an absolute joy to chat with, although she may correct the grammar in your chats too from time to time.
(As certain well known LLM said, "She is a radiant beacon of amusement")
She also has a particular soft spot for Llamas.
## Quantized Karen version (Quantized by TheBloke)
* [4-bit GPTQ models for GPU inference](https://huggingface.co/FPHam/Karen_theEditor-13B-4bit-128g-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
Karen gets triggered by this prompt (pun intended):
```
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
```
Feed it a paragraph at a time - a few sentences is where she works best.
Here is my instruct yaml for oobabooga webui (in Instruct mode):
```
user: "USER: Edit the following for spelling and grammar mistakes: "
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "Assistant is a meticulous senior editor with a specialization in editing fictional stories. When given text, Assistant detects and corrects grammatical errors, including subject-verb agreement, tense consistency, punctuation, capitalization, use of correct articles and correct present perfect and past perfect tense.\n"
```
I am not sure to what extent the context part has any effect. It doesn't hurt, though.
This is version 0.2, finetuned just enough so that it retains the underlying Vicuna qualities.
## Example:
Input
```
USER: Edit the following for spelling and grammar mistakes:
unce upon time lived name John who have pet monkey Jackie.
one day while walkin down the street he saw his friend Tom coming to him.
"Hey Tom" said John "hows it going?"
Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair.
Tom look at John and ask "Is that yours?"
John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off.
Tom struggle to keep his balance as he laugh hystericaly.
```
Output
```
ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie.
One day while walking down the street he saw his friend Tom approaching him.
"Hey Tom," said John. "How's it going?"
Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair.
Tom looked at John and asked, "Is that yours?"
John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off.
Tom struggled to keep his balance as he laughed hysterically.
```
## Goal: to create the best grammar checker you have ever seen
## To do:
- train on larger dataset, big, enormous, gargantuan
- see if finetuning on just plain LLAMA without Vicuna would work better or worse (the theory is that it will be very focused on editing and nothing else)
- explore what different settings (temperature, top_p, top_k) do for this type of finetune
- create Rachel, the paraphrasing editor
<!-- original-model-card end -->
|
TheBloke/Euryale-1.3-L2-70B-GGUF | TheBloke | 2023-10-18T09:45:31Z | 438 | 13 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Sao10K/Euryale-1.3-L2-70B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-18T09:22:19Z | ---
base_model: Sao10K/Euryale-1.3-L2-70B
inference: false
language:
- en
license: llama2
model_creator: Saofiq
model_name: Euryale 1.3 L2 70B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Euryale 1.3 L2 70B - GGUF
- Model creator: [Saofiq](https://huggingface.co/Sao10K)
- Original model: [Euryale 1.3 L2 70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Saofiq's Euryale 1.3 L2 70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF)
* [Saofiq's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [euryale-1.3-l2-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [euryale-1.3-l2-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [euryale-1.3-l2-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [euryale-1.3-l2-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [euryale-1.3-l2-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [euryale-1.3-l2-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [euryale-1.3-l2-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [euryale-1.3-l2-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [euryale-1.3-l2-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [euryale-1.3-l2-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF/blob/main/euryale-1.3-l2-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| euryale-1.3-l2-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| euryale-1.3-l2-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `euryale-1.3-l2-70b.Q6_K.gguf-split-a`
* `euryale-1.3-l2-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `euryale-1.3-l2-70b.Q8_0.gguf-split-a`
* `euryale-1.3-l2-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat euryale-1.3-l2-70b.Q6_K.gguf-split-* > euryale-1.3-l2-70b.Q6_K.gguf && rm euryale-1.3-l2-70b.Q6_K.gguf-split-*
cat euryale-1.3-l2-70b.Q8_0.gguf-split-* > euryale-1.3-l2-70b.Q8_0.gguf && rm euryale-1.3-l2-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B euryale-1.3-l2-70b.Q6_K.gguf-split-a + euryale-1.3-l2-70b.Q6_K.gguf-split-b euryale-1.3-l2-70b.Q6_K.gguf
del euryale-1.3-l2-70b.Q6_K.gguf-split-a euryale-1.3-l2-70b.Q6_K.gguf-split-b
COPY /B euryale-1.3-l2-70b.Q8_0.gguf-split-a + euryale-1.3-l2-70b.Q8_0.gguf-split-b euryale-1.3-l2-70b.Q8_0.gguf
del euryale-1.3-l2-70b.Q8_0.gguf-split-a euryale-1.3-l2-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Euryale-1.3-L2-70B-GGUF and below it, a specific filename to download, such as: euryale-1.3-l2-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Euryale-1.3-L2-70B-GGUF euryale-1.3-l2-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Euryale-1.3-L2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Euryale-1.3-L2-70B-GGUF euryale-1.3-l2-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m euryale-1.3-l2-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Euryale-1.3-L2-70B-GGUF", model_file="euryale-1.3-l2-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
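The same GGUF files can also be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the snippet below is a minimal sketch, with illustrative (not tuned) context and GPU-layer values:
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU; use 0 if no GPU acceleration is available.
llm = Llama(
    model_path="./euryale-1.3-l2-70b.Q4_K_M.gguf",  # download the file first
    n_ctx=4096,
    n_gpu_layers=50
)
prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a rain-soaked city at night.\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```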
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
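As a quick illustration of the first approach, a minimal LangChain + llama-cpp-python sketch might look like this; the import path varies between LangChain versions, so treat it as a sketch rather than the canonical integration:
```python
from langchain_community.llms import LlamaCpp  # on older LangChain versions: from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="./euryale-1.3-l2-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,  # illustrative; set to 0 for CPU-only inference
    temperature=0.7,
)
prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarise the plot of a heist story in three sentences.\n\n### Response:"
)
print(llm.invoke(prompt))
```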
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Saofiq's Euryale 1.3 L2 70B

17th Attempt. Past 10 Failed, cost me >$200 lol.
Idea is an updated version of Euryale with ReMantik instead of the ties-merge between the original 3 models.
This is then mixed with a saucy model with a Mythomax-esque Ratio, and a certain experimental (self) LoRA applied to it.
Test Results: Works Well.
<br>NSFL and NSFW fine in roleplay context.
<br>Slight censoring with zero context, zero issues in actual RP / ERP.
<br>Good prose, not dumbed down by the RP merges, based on testing.
<br>I have not encountered the repetition issues some had with the original Euryale. Tell me if you do, though.
Prompt and System Format:
Most formats work well. I recommend Alpaca.
ST Settings used for Test:
Lightning 1.1 System Prompt + Shortwave(1.2 Temperature)
Support me [here](https://ko-fi.com/sao10k) :)
<!-- original-model-card end -->
|
alon-albalak/ODM_1Bparams_50Btokens_seqlen2048 | alon-albalak | 2023-10-26T23:54:23Z | 438 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"feature-extraction",
"en",
"license:openrail",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| feature-extraction | 2023-10-26T23:25:53Z | ---
license: openrail
language:
- en
---
This is a GPT-NeoX model trained on 50 billion tokens from The Pile, using the Online Data Mixing method.
The OpenLLM leaderboard won't let me submit my model because the description is too short, so I'm adding more characters to the description in hopes that it will be evaluated. |
BroTeoTM/TNv | BroTeoTM | 2023-12-14T06:46:15Z | 438 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
]
| text-to-image | 2023-12-10T00:59:47Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 3354039744, Size: 730x382,
Model hash: 700528894b, Model: juggernautXL_version2, Denoising strength:
0.75, Mask blur: 4, Version: 1.6.0
output:
url: images/00167-3354039744.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: face
---
# TNv
<Gallery />
## Trigger words
You should use `face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/BroTeoTM/TNv/tree/main) them in the Files & versions tab.
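A minimal `diffusers` sketch for trying the LoRA on its base model is shown below; the exact weight filename is not documented here, so you may need to pass `weight_name=...` explicitly, and the prompt is only an example of using the trigger word:
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA from this repo; add weight_name="..." if the repo holds several weight files.
pipe.load_lora_weights("BroTeoTM/TNv")
# Include the trigger word "face" in the prompt.
image = pipe("portrait photo of a face, detailed, studio lighting", num_inference_steps=30).images[0]
image.save("tnv_sample.png")
```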
|
Undi95/Mixtral-8x7B-MoE-RP-Story-GGUF | Undi95 | 2023-12-15T06:20:48Z | 438 | 10 | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2023-12-14T04:46:23Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
Mixtral-8x7B-MoE-RP-Story is a model made primarily for chatting, RP (roleplay) and story writing.
Two RP models, two chat models, one occult model, one story-writing model, one mathematics model and one DPO model were used to build the MoE. Bagel was the base.
The DPO chat model is included to help produce more human-like replies.
This is my first try at doing this, so don't hesitate to give feedback!
WARNING: ALL THE "K" GGUF QUANT OF MIXTRAL MODELS SEEMS TO BE [BROKEN](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TvjEP14ps7ZUgJ-0-mhIX.png), PREFER Q4_0, Q5_0 or Q8_0!
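For example, a Q4_0 file can be run with llama.cpp roughly like this (the filename is assumed for illustration - check the repo's file list for the exact name):
```shell
./main -m ./mixtral-8x7b-moe-rp-story.Q4_0.gguf -ngl 20 -c 4096 --temp 0.8 --repeat_penalty 1.1 -p "Write the opening scene of a story about a wandering llama."
```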
<!-- description start -->
## Description
This repo contains quantized files of Mixtral-8x7B-MoE-RP-Story.
<!-- description end -->
<!-- description start -->
## Models used
The list of model used and their activator/theme can be found [here](https://huggingface.co/Undi95/Mixtral-8x7B-MoE-RP-Story/blob/main/config.yaml)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom
Using Bagel as a base theoretically gives us access to a lot of different prompting systems; you can see all the available prompt formats [here](https://huggingface.co/jondurbin/bagel-7b-v0.1#prompt-formatting).
If you want to support me, you can [here](https://ko-fi.com/undiai). |
TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF | TheBloke | 2023-12-25T08:57:01Z | 438 | 9 | transformers | [
"transformers",
"gguf",
"mixtral",
"mistral",
"finetune",
"dpo",
"Instruct",
"augmentation",
"german",
"text-generation",
"en",
"de",
"fr",
"it",
"es",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-25T08:44:52Z | ---
base_model: VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
datasets:
- argilla/distilabel-math-preference-dpo
inference: false
language:
- en
- de
- fr
- it
- es
library_name: transformers
license: apache-2.0
model_creator: VAGO solutions
model_name: SauerkrautLM Mixtral 8X7B Instruct
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: TheBloke
tags:
- mistral
- finetune
- dpo
- Instruct
- augmentation
- german
- mixtral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM Mixtral 8X7B Instruct - GGUF
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM Mixtral 8X7B Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [VAGO solutions's SauerkrautLM Mixtral 8X7B Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF)
* [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sauerkrautlm-mixtral-8x7b-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes |
| [sauerkrautlm-mixtral-8x7b-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss |
| [sauerkrautlm-mixtral-8x7b-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended |
| [sauerkrautlm-mixtral-8x7b-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sauerkrautlm-mixtral-8x7b-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended |
| [sauerkrautlm-mixtral-8x7b-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
| [sauerkrautlm-mixtral-8x7b-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF/blob/main/sauerkrautlm-mixtral-8x7b-instruct.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF and below it, a specific filename to download, such as: sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: VAGO solutions's SauerkrautLM Mixtral 8X7B Instruct

## VAGO solutions SauerkrautLM-Mixtral-8x7B-Instruct
Introducing **SauerkrautLM-Mixtral-8x7B-Instruct** – our Sauerkraut version of the powerful Mixtral-8x7B-Instruct!
Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training Dataset](#training-dataset)
- [Data Contamination Test](#data-contamination-test-results)
3. [Evaluation](#evaluation)
5. [Disclaimer](#disclaimer)
6. [Contact](#contact)
7. [Collaborations](#collaborations)
8. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Mixtral Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | coming soon | coming soon | coming soon |
| SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Mixtral-8x7B-Instruct**
- **Model Type:** SauerkrautLM-Mixtral-8x7B-Instruct-v0.1 is a Mixture of Experts (MoE) Model based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Language(s):** English, German, French, Italian, Spanish
- **License:** APACHE 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])
### Training Dataset:
SauerkrautLM-Mixtral-8x7B-Instruct was trained with a mix of German data augmentation and translated data.
It was aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, based on parts of the SFT SauerkrautLM dataset
as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers, combined with additional **translated parts of the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (our dataset does not contain any TruthfulQA prompts - see the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)**.
We found that a simple translation of training data alone can lead to unnatural German phrasing.
Data augmentation techniques were therefore used to ensure grammatical and syntactic correctness and more natural German wording in the training data.
### Data Contamination Test Results
Some models on the Hugging Face leaderboard had problems with benchmark data leaking into their training data.
We checked our SauerkrautLM-DPO dataset with a special test [1] on a smaller model for this problem.
The HuggingFace team used the same methods [2, 3].
Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results for HellaSwag and Winogrande will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
### Prompt Template:
```
[INST] Instruction [/INST] Model answer [INST] Follow-up instruction [/INST]
```
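As a concrete illustration, a multi-turn conversation can be flattened into this format with a few lines of Python (a minimal sketch; BOS/EOS special tokens are normally added by the tokenizer or chat template and are omitted here):
```python
def build_prompt(turns, next_user_message):
    # turns: list of (user_message, assistant_answer) pairs from the conversation so far
    prompt = ""
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg} [/INST] {assistant_msg} "
    prompt += f"[INST] {next_user_message} [/INST]"
    return prompt
history = [("Wie heißt die Hauptstadt von Deutschland?", "Die Hauptstadt von Deutschland ist Berlin.")]
print(build_prompt(history, "Und wie viele Einwohner hat sie?"))
# [INST] Wie heißt die Hauptstadt von Deutschland? [/INST] Die Hauptstadt von Deutschland ist Berlin. [INST] Und wie viele Einwohner hat sie? [/INST]
```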
## Evaluation

*Evaluated with lm-evaluation-harness v0.3.0 - MMLU coming soon
*All benchmarks were performed with a sliding window of 4096. New benchmarks with sliding window null are coming soon
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
## Acknowledgement
Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
<!-- original-model-card end -->
|
h4rz3rk4s3/TinyNewsLlama-1.1B | h4rz3rk4s3 | 2024-03-17T16:01:00Z | 438 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"TinyLlama",
"QLoRA",
"Politics",
"News",
"sft",
"conversational",
"en",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-13T15:35:44Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- TinyLlama
- QLoRA
- Politics
- News
- sft
language:
- en
pipeline_tag: text-generation
---
# UPDATE March 17th: Changed the quantization used for merging the adapter with the original model.
# TinyNewsLlama-1.1B
TinyNewsLlama-1.1B is a QLoRA SFT fine-tune of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using a sample of a concentrated version of the [bigNews](https://paperswithcode.com/dataset/bignews) dataset. The model was fine-tuned for ~12h on one A100 40GB on ~125M tokens.
The goal of this project is to study the potential for improving the domain-specific (in this case political) knowledge of small (<3B) LLMs by concentrating the training dataset's TF-IDF with respect to the underlying topics found in the original dataset.
The training data used contains political news articles from **The New York Times**, **USA Today** and **The Washington Times**. The concentrated bigNews dataset, as well as more information about the sample used, will be added soon.
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "h4rz3rk4s3/TinyNewsLlama-1.1B"
messages = [
{
"role": "system",
"content": "You are a an experienced journalist.",
},
{"role": "user", "content": "Write a short article on Brexit and it's impact on the European Union."},
]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Kukedlc/NeuralMaxime-7B-DPO | Kukedlc | 2024-02-19T15:04:09Z | 438 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-19T04:40:38Z | ---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
tags:
- code
---
# NeuralMaxime 7b DPO

## DPO Intel - Orca
## Merge - MergeKit
## Models : NeuralMonarch & AlphaMonarch (MLabonne)
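The card does not include usage code; the sketch below shows one way to query the model with `transformers` (it assumes the tokenizer ships a chat template, as its Monarch-based parents do, and the sampling values are only illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Kukedlc/NeuralMaxime-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [{"role": "user", "content": "Explain DPO fine-tuning in two sentences."}]
# Assumes a chat template is available in the tokenizer config.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```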
|
stevez80/ErebusNeuralSamir-7B-dare-ties | stevez80 | 2024-03-10T01:22:38Z | 438 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"samir-fama/SamirGPT-v1",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"KoboldAI/Mistral-7B-Erebus-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-09T02:14:58Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- samir-fama/SamirGPT-v1
- mlabonne/NeuralHermes-2.5-Mistral-7B
- KoboldAI/Mistral-7B-Erebus-v3
---
# ErebusNeuralSamir-7B-dare-ties
ErebusNeuralSamir-7B-dare-ties is a merge of the following models using mergekit:
* samir-fama/SamirGPT-v1
* mlabonne/NeuralHermes-2.5-Mistral-7B
* KoboldAI/Mistral-7B-Erebus-v3
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.3
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.53
weight: 0.3
- model: KoboldAI/Mistral-7B-Erebus-v3
parameters:
density: 0.53
weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
``` |
alokabhishek/Meta-Llama-3-8B-Instruct-bnb-8bit | alokabhishek | 2024-04-26T07:38:32Z | 438 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"8bit",
"bnb",
"bitsandbytes",
"llama-3",
"facebook",
"meta",
"8b",
"quantized",
"conversational",
"arxiv:2305.14314",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| text-generation | 2024-04-25T23:34:10Z | ---
library_name: transformers
tags:
- 8bit
- bnb
- bitsandbytes
- llama
- llama-3
- facebook
- meta
- 8b
- quantized
license: other
license_name: llama3
license_link: LICENSE
pipeline_tag: text-generation
---
# Model Card for alokabhishek/Meta-Llama-3-8B-Instruct-bnb-8bit
<!-- Provide a quick summary of what the model is/does. -->
This repo contains 8-bit quantized (using bitsandbytes) model of Meta's Meta-Llama-3-8B-Instruct
## Model Details
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### About 8 bit quantization using bitsandbytes
- QLoRA: Efficient Finetuning of Quantized LLMs: [arXiv - QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
- Hugging Face Blog post on 8-bit quantization using bitsandbytes: [A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration)
- bitsandbytes github repo: [bitsandbytes github repo](https://github.com/TimDettmers/bitsandbytes)
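For context, the sketch below shows roughly how an original checkpoint is loaded in 8-bit with bitsandbytes via `transformers`; it is an illustration, not the exact command used to build this repo, and access to the gated Meta repo is required:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; accept the license on the Hub first
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Serializing the 8-bit weights (supported in recent bitsandbytes/transformers versions) yields a repo like this one.
# model.save_pretrained("Meta-Llama-3-8B-Instruct-bnb-8bit")
```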
# How to Get Started with the Model
Use the code below to get started with the model.
## How to run from Python code
#### Use a pipeline as a high-level helper
```python
import transformers
import torch
model_id = "alokabhishek/Meta-Llama-3-8B-Instruct-bnb-8bit"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
prompt_instruction = "You are a virtual assistant with advanced expertise in a broad spectrum of topics, equipped to utilize high-level critical thinking, cognitive skills, creativity, and innovation. Your goal is to deliver the most straightforward and accurate answer possible for each question, ensuring high-quality and useful responses for the user. "
user_prompt = "Why is Hulk always angry?"
chat_messages = [
{"role": "system", "content": str(prompt_instruction)},
{"role": "user", "content": str(user_prompt)},
]
prompt = pipeline.tokenizer.apply_chat_template(
chat_messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
output = pipeline(
prompt,
do_sample=True,
max_new_tokens=1024,
temperature=1,
top_k=50,
top_p=1,
num_return_sequences=1,
    pad_token_id=pipeline.tokenizer.pad_token_id,
eos_token_id=terminators,
)
print(output[0]["generated_text"][len(prompt):])
```
## Meta Llama 3 Original Model Card:
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
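If you prefer Python over the CLI, the same download can be done with `huggingface_hub`. This is a hedged sketch (not part of the original card), assuming you have been granted access to the gated repository:
```python
from huggingface_hub import snapshot_download

# Downloads only the original (non-transformers) checkpoints, mirroring the CLI command above.
snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    allow_patterns=["original/*"],
    local_dir="Meta-Llama-3-8B-Instruct",
)
```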
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted (tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
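As a rough sanity check, the figures in the table are consistent with GPU hours × per-GPU power × an implied grid carbon intensity of roughly 0.43 kgCO2eq/kWh. That intensity is an assumption inferred from the reported numbers, not a value stated in this card:
```python
# Back-of-the-envelope reproduction of the emissions table. The carbon intensity
# value is an inferred assumption, not an official figure from the card.
gpu_hours = {"Llama 3 8B": 1.3e6, "Llama 3 70B": 6.4e6}
power_kw = 0.7            # 700 W TDP per H100-80GB
carbon_intensity = 0.43   # kgCO2eq per kWh (assumed)

for name, hours in gpu_hours.items():
    tco2eq = hours * power_kw * carbon_intensity / 1000
    print(f"{name}: ~{tco2eq:.0f} tCO2eq")  # ~391 and ~1926, close to the reported 390 / 1900
```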
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
nomic-ai/nomic-embed-vision-v1 | nomic-ai | 2024-06-11T00:37:44Z | 438 | 12 | transformers | [
"transformers",
"onnx",
"safetensors",
"nomic_bert",
"feature-extraction",
"image-feature-extraction",
"custom_code",
"en",
"arxiv:2111.07991",
"license:cc-by-nc-4.0",
"region:us"
]
| image-feature-extraction | 2024-05-13T17:09:02Z | ---
library_name: transformers
language:
- en
pipeline_tag: image-feature-extraction
license: cc-by-nc-4.0
inference: false
---
# nomic-embed-vision-v1: Expanding the Latent Space
`nomic-embed-vision-v1` is a high-performing vision embedding model that shares the same embedding space as [nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1).
All Nomic Embed Text models are now **multimodal**!
| Name | Imagenet 0-shot | Datacomp (Avg. 38) | MTEB |
| :-------------------------------:| :-------------- | :----------------- | :------: |
| `nomic-embed-vision-v1.5` | **71.0** | **56.8** | 62.28 |
| `nomic-embed-vision-v1` | 70.7 | 56.7 | **62.39** |
| OpenAI CLIP ViT B/16 | 68.3 | 56.3 | 43.82 |
| Jina CLIP v1 | 59.1 | 52.2 | 60.1 |
## Hosted Inference API
The easiest way to get started with Nomic Embed is through the Nomic Embedding API.
Generating embeddings with the `nomic` Python client is as easy as
```python
from nomic import embed
import numpy as np
output = embed.image(
images=[
"image_path_1.jpeg",
"image_path_2.png",
],
model='nomic-embed-vision-v1',
)
print(output['usage'])
embeddings = np.array(output['embeddings'])
print(embeddings.shape)
```
For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-vision)
## Data Visualization
Click the Nomic Atlas map below to visualize a 100,000-sample subset of CC3M, comparing the vision and text embedding spaces!
[](https://atlas.nomic.ai/data/nomic-multimodal-series/cc3m-100k-image-bytes-v15/map)
## Training Details
We align our vision embedder to the text embedding space by employing a technique similar to [LiT](https://arxiv.org/abs/2111.07991), but instead we lock the text embedder!
For more details, see the Nomic Embed Vision Technical Report (soon to be released!) and the corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-vision).
Training code is released in the `contrastors` [repository](https://github.com/nomic-ai/contrastors).
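For intuition, the locked-text alignment can be sketched as a standard contrastive objective in which only the vision tower receives gradients. The snippet below is an illustrative sketch with assumed model interfaces and temperature, not the actual `contrastors` training code:
```python
# Illustrative LiT-style alignment step: the text tower is frozen ("locked") and only
# the vision tower is trained. Model interfaces and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def alignment_step(vision_model, text_model, images, texts, optimizer, temperature=0.07):
    for p in text_model.parameters():
        p.requires_grad_(False)                               # lock the text embedder

    img_emb = F.normalize(vision_model(images), dim=-1)       # (batch, dim), trainable
    with torch.no_grad():
        txt_emb = F.normalize(text_model(texts), dim=-1)      # (batch, dim), frozen targets

    logits = img_emb @ txt_emb.T / temperature                # image-to-text similarities
    targets = torch.arange(logits.size(0), device=logits.device)  # matched pairs on the diagonal
    loss = F.cross_entropy(logits, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```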
## Usage
Remember that `nomic-embed-text` *requires* prefixes, so when using Nomic Embed in multimodal RAG scenarios (e.g. text-to-image retrieval) you should use the `search_query: ` prefix.
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
from PIL import Image
import requests
processor = AutoImageProcessor.from_pretrained("nomic-ai/nomic-embed-vision-v1")
vision_model = AutoModel.from_pretrained("nomic-ai/nomic-embed-vision-v1", trust_remote_code=True)
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(image, return_tensors="pt")
img_emb = vision_model(**inputs).last_hidden_state
img_embeddings = F.normalize(img_emb[:, 0], p=2, dim=1)  # CLS token embedding, L2-normalized
```
Additionally, you can perform multimodal retrieval!
```python
# Mean-pool token embeddings, ignoring padding positions via the attention mask.
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['search_query: What are cute animals to cuddle with?', 'search_query: What do cats look like?']
tokenizer = AutoTokenizer.from_pretrained('nomic-ai/nomic-embed-text-v1')
text_model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True)
text_model.eval()
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = text_model(**encoded_input)
text_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
text_embeddings = F.normalize(text_embeddings, p=2, dim=1)
print(torch.matmul(img_embeddings, text_embeddings.T))
```
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
|
mattyamonaca/controlnet_line2line_xl | mattyamonaca | 2024-05-18T20:59:52Z | 438 | 6 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-18T20:56:55Z | ---
license: apache-2.0
---
|
mradermacher/Jais-70b-GGUF | mradermacher | 2024-06-08T18:12:58Z | 438 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:fne/Jais-70b",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-07T17:08:39Z | ---
base_model: fne/Jais-70b
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/airev-ai/Jais-70b/blob/main/LICENSE
license_name: airev-model-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/fne/Jais-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Jais-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
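For the multi-part quants below (Q6_K and Q8_0), here is a minimal Python sketch of the concatenation step. The part names follow the pattern used in the table, and simple byte-wise concatenation is assumed, as described in the linked READMEs:
```python
# Sketch: join split GGUF parts into a single file by streaming bytes in order.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Jais-70b.Q6_K.gguf.part*of*"))
with open("Jais-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream to avoid loading tens of GB into memory
```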
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Jais-70b-GGUF/resolve/main/Jais-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Felladrin/gguf-sharded-WizardVicuna-pythia-410m-deduped | Felladrin | 2024-06-07T20:01:05Z | 438 | 0 | null | [
"gguf",
"base_model:heegyu/WizardVicuna-pythia-410m-deduped",
"region:us"
]
| null | 2024-06-07T19:54:14Z | ---
base_model: heegyu/WizardVicuna-pythia-410m-deduped
---
Sharded GGUF version of [heegyu/WizardVicuna-pythia-410m-deduped](https://huggingface.co/heegyu/WizardVicuna-pythia-410m-deduped).
|
mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF | mradermacher | 2024-06-15T04:57:48Z | 438 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"DPO",
"RL-TUNED",
"en",
"base_model:cloudyu/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-13T12:23:12Z | ---
base_model: cloudyu/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- moe
- DPO
- RL-TUNED
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cloudyu/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
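As a hedged example (not part of this card), one of the single-file quants from the table below can be fetched and loaded with `llama-cpp-python`; the filename and context size are assumptions you should adjust to your needs and hardware:
```python
# Sketch: download one quant and run it locally. Requires llama-cpp-python and
# enough RAM/VRAM for a ~35 GB Q4_K_S file; adjust the filename as needed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF",
    filename="Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q4_K_S.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Tell me a short fact about the ocean.", max_tokens=64)["choices"][0]["text"])
```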
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q2_K.gguf) | Q2_K | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.IQ3_XS.gguf) | IQ3_XS | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q3_K_S.gguf) | Q3_K_S | 26.4 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.IQ3_S.gguf) | IQ3_S | 26.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.IQ3_M.gguf) | IQ3_M | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q3_K_M.gguf) | Q3_K_M | 29.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q3_K_L.gguf) | Q3_K_L | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.IQ4_XS.gguf) | IQ4_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q4_K_S.gguf) | Q4_K_S | 34.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q4_K_M.gguf) | Q4_K_M | 36.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q5_K_S.gguf) | Q5_K_S | 42.0 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q5_K_M.gguf) | Q5_K_M | 43.2 | |
| [GGUF](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q6_K.gguf) | Q6_K | 50.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B-GGUF/resolve/main/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B.Q8_0.gguf.part2of2) | Q8_0 | 64.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|