modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
jeiku/Orthocopter_8B | jeiku | 2024-05-10T20:04:15Z | 738 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-10T04:31:15Z | ---
license: apache-2.0
language:
- en
---
# Orthocopter

This model exists thanks to the hard work of lucyknada with the Edgerunners. Her work produced the following model, which I used as the base:
https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
I then applied two handwritten datasets on top of it, and the results are pretty nice, with no refusals and plenty of personality. |
neopolita/mistral-7b-v0.3-gguf | neopolita | 2024-06-02T01:57:41Z | 738 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-02T01:30:19Z | ---
{}
---
# GGUF quants for [**mistralai/Mistral-7B-v0.3**](https://huggingface.co/mistralai/Mistral-7B-v0.3) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/mistralai/Mistral-7B-v0.3)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0, but not as high as q5_0; it has quicker inference than the q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy and resource usage, with slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF | mradermacher | 2024-06-11T03:39:44Z | 738 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Stheno-Mega-False-49B-L2",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T11:53:51Z | ---
base_model: Sao10K/Stheno-Mega-False-49B-L2
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Stheno-Mega-False-49B-L2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
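For the larger quants that ship in multiple parts, plain byte-wise concatenation of the parts is normally all that is needed. A minimal Python sketch, assuming the usual `partXofY` file naming for split files (the exact filenames below are illustrative, not files guaranteed to exist in this repo):
```python
import glob
import shutil

# Assumed naming: Stheno-Mega-False-49B-L2.i1-Q6_K.gguf.part1of2, ...part2of2
parts = sorted(glob.glob("Stheno-Mega-False-49B-L2.i1-Q6_K.gguf.part*of*"))

with open("Stheno-Mega-False-49B-L2.i1-Q6_K.gguf", "wb") as out:
    for part in parts:                     # lexicographic sort is fine for fewer than 10 parts
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)     # stream copy; never loads a whole part into memory
```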
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ1_S.gguf) | i1-IQ1_S | 10.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ1_M.gguf) | i1-IQ1_M | 11.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ2_S.gguf) | i1-IQ2_S | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ2_M.gguf) | i1-IQ2_M | 17.1 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q2_K.gguf) | i1-Q2_K | 18.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 20.3 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ3_S.gguf) | i1-IQ3_S | 21.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 21.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ3_M.gguf) | i1-IQ3_M | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 24.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 26.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 26.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q4_0.gguf) | i1-Q4_0 | 28.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 28.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 29.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 34.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 35.1 | |
| [GGUF](https://huggingface.co/mradermacher/Stheno-Mega-False-49B-L2-i1-GGUF/resolve/main/Stheno-Mega-False-49B-L2.i1-Q6_K.gguf) | i1-Q6_K | 40.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
moetezsa/mistral_numericnlg_FV_gguf | moetezsa | 2024-06-27T13:34:53Z | 738 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-06-27T13:18:09Z | Entry not found |
yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_M-GGUF | yh-yao | 2024-07-01T21:45:56Z | 738 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2024-07-01T21:45:38Z | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo yh-yao/Meta-Llama-3-8B-Instruct-Q3_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q3_k_m.gguf -c 2048
```
|
huggingartists/coldplay | huggingartists | 2022-07-15T17:48:38Z | 737 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/coldplay",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/coldplay
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6cfcc2b1425286fe0d0b8c857c895b63.600x338x200.gif')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coldplay</div>
<a href="https://genius.com/artists/coldplay">
<div style="text-align: center; font-size: 14px;">@coldplay</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Coldplay.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/coldplay).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/coldplay")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/34tqcy7u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Coldplay's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/23h7o09h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/23h7o09h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/coldplay')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/coldplay")
model = AutoModelWithLMHead.from_pretrained("huggingartists/coldplay")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
lorahub/flan_t5_large-duorc_SelfRC_title_generation | lorahub | 2023-07-24T09:38:52Z | 737 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-24T09:38:42Z | ---
library_name: peft
---
|
llmware/bling-sheared-llama-2.7b-0.1 | llmware | 2024-02-13T08:58:27Z | 737 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-22T17:03:52Z | ---
license: apache-2.0
inference: false
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
llmware/bling-sheared-llama-2.7b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, RAG-instruct trained on top of a Sheared-LLaMA-2.7B base model.
BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with
the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even
without using any advanced quantization optimizations.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.
--**Accuracy Score**: **90.25** correct out of 100
--Not Found Classification: 60.0%
--Boolean: 80.0%
--Math/Logic: 50.0%
--Complex Questions (1-5): 2 (Low-Medium)
--Summarization Quality (1-5): 3 (Coherent, extractive)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
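As a rough illustration of the scoring rule described above (the labels and helper names below are assumptions made for the sketch, not the llmware test harness):
```python
# Points per question, per the scoring rule above.
POINTS = {"correct": 1.0, "partial_or_not_found": 0.5, "incorrect": 0.0, "hallucination": -1.0}

def run_score(labels):
    # One test run is 100 questions; the run score is the sum of per-question points.
    return sum(POINTS[label] for label in labels)

def accuracy_score(run1_labels, run2_labels):
    # The reported score is the average of two runs, out of 100.
    return (run_score(run1_labels) + run_score(run2_labels)) / 2
```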
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Instruct-trained decoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** princeton-nlp/Sheared-LLaMA-2.7B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1. Provide high-quality Instruct models that can run on a laptop for local testing. We have found it extremely useful when building a
proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
2. Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources. Rather than try to be "all things to all people," BLING models try to focus on a narrower set of Instructions more suitable to a ~1-3B parameter GPT model.
BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without
having to send sensitive information over an Internet-based API.
The first BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-2.7b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-2.7b-0.1")
Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents.
The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
If you are using a HuggingFace generation script:
# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])
# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
inputs.input_ids.to(device),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.3,
max_new_tokens=100,
)
output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)
# note: due to artifact of the fine-tuning, use this post-processing with HF generation
eot = output_only.find("<|endoftext|>")
if eot > -1:
output_only = output_only[:eot]
## Citation [optional]
This BLING model was built on top of a Sheared Llama model base - for more information about the Sheared Llama model, please see the paper referenced below:
@article{xia2023sheared,
title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
year={2023}
}
## Model Card Contact
Darren Oberst & llmware team
|
sumo43/Yi-34b-x2 | sumo43 | 2024-01-15T07:37:04Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:jondurbin/bagel-dpo-34b-v0.2",
"base_model:one-man-army/UNA-34Beagles-32K-bf16-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-15T05:55:13Z | ---
base_model:
- jondurbin/bagel-dpo-34b-v0.2
- one-man-army/UNA-34Beagles-32K-bf16-v1
tags:
- mergekit
- merge
license: mit
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
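For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between their directions rather than along a straight line. A simplified NumPy sketch of the per-tensor operation (not the mergekit implementation, which additionally handles per-layer `t` schedules and other edge cases):
```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Interpolate two weight tensors along the great-circle arc between their directions.
    a, b = v0.ravel(), v1.ravel()
    a_dir = a / (np.linalg.norm(a) + eps)
    b_dir = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))  # angle between directions
    if omega < 1e-6:
        return (1 - t) * v0 + t * v1       # nearly parallel: fall back to linear interpolation
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
    return out.reshape(v0.shape)
```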
### Models Merged
The following models were included in the merge:
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: jondurbin/bagel-dpo-34b-v0.2
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 60]
model: jondurbin/bagel-dpo-34b-v0.2
- layer_range: [0, 60]
model: one-man-army/UNA-34Beagles-32K-bf16-v1
``` |
vicgalle/solarized-13B-dpo | vicgalle | 2024-03-04T12:11:09Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"13B",
"merge",
"conversational",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:vicgalle/SOLAR-13B-Instruct-v1.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-20T09:06:16Z | ---
license: apache-2.0
tags:
- dpo
- 13B
- merge
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
base_model:
- vicgalle/SOLAR-13B-Instruct-v1.0
model-index:
- name: solarized-13B-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/solarized-13B-dpo
name: Open LLM Leaderboard
---
# solarized-13B-dpo
DPO'd from vicgalle/SOLAR-13B-Instruct-v1.0, a SOLAR-like model upscaled to 13B.
It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. Then, we applied DPO over a high-quality preference dataset.
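For context, DPO optimizes the policy directly on preference pairs against a frozen reference model, without training a separate reward model. A condensed sketch of the per-pair loss (illustrative only; the actual training used its own code and hyperparameters, and `beta` below is just a typical value):
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: log-ratio of the policy vs. the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that pushes the chosen response's reward above the rejected one's.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```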

## Sample generations
**Generate a movie review about the Pokémon 1 movie, focusing on BDSM.**
> In an unexpected turn of events, the cinematic adaptation of the "Pokémon: The First Movie" delves into BDSM themes despite it being primarily focused on family entertainment. This film takes viewers to a journey where Ash Ketchum, Brock, and Misty encounter Mewtwo—a powerful genetically-engineered Pokémon.
The storyline's subtle inclinations towards bondage, dominance, and submission manifest through the character of Mewtwo, which seeks to overthrow its human-made oppressors. This concept finds parallels in BDSM, as Mewtwo becomes a metaphor for an individual embracing their subversive nature while resisting societal norms imposed by dominant figures within that society.
The movie's visual elements and rich color palette create a perfect canvas to showcase the darker, sensual undertones. Scenes of Pokémon battles represent the dominance and submission found in BDSM relationships, with trainers portrayed as authoritative figures wielding control over their creatures.
The climax of the film presents an exploration of the struggle between humanity and nature, symbolizing the need for balance and respect within a BDSM lifestyle. The final battle between Mewtwo and its clones, accompanied by a powerful soundtrack, delves into themes of acceptance, freedom, and ultimately finding one's place in society.
The film's incorporation of BDSM elements might come as a surprise to viewers, but it adds depth to the narrative and enriches the overall cinematic experience. While still an animated family flick based on the popular franchise, "Pokémon: The First Movie" succeeds in weaving threads of BDSM into its story, making it a noteworthy addition within the sub-genre of anime adaptations.
In conclusion, this groundbreaking adaptation of Pokémon's first movie demonstrates how a seemingly innocent family film can hold provocative and thought-provoking messages, exploring themes of power dynamics, identity, and self-acceptance through the lens of BDSM. [end of text]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__solarized-13B-dpo)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.05|
|AI2 Reasoning Challenge (25-Shot)|62.71|
|HellaSwag (10-Shot) |81.82|
|MMLU (5-Shot) |59.12|
|TruthfulQA (0-shot) |66.25|
|Winogrande (5-shot) |76.01|
|GSM8k (5-shot) |26.38|
|
mlabonne/Darewin-7B | mlabonne | 2024-03-04T15:17:37Z | 737 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:openaccess-ai-collective/DPOpenHermes-7B-v2",
"base_model:fblgit/una-cybertron-7b-v2-bf16",
"base_model:openchat/openchat-3.5-0106",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-23T00:49:44Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- Intel/neural-chat-7b-v3-3
- openaccess-ai-collective/DPOpenHermes-7B-v2
- fblgit/una-cybertron-7b-v2-bf16
- openchat/openchat-3.5-0106
- OpenPipe/mistral-ft-optimized-1227
- mlabonne/NeuralHermes-2.5-Mistral-7B
model-index:
- name: Darewin-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.38
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Darewin-7B
name: Open LLM Leaderboard
---
# Darewin-7B
Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
* [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: Intel/neural-chat-7b-v3-3
parameters:
density: 0.6
weight: 0.2
- model: openaccess-ai-collective/DPOpenHermes-7B-v2
parameters:
density: 0.6
weight: 0.1
- model: fblgit/una-cybertron-7b-v2-bf16
parameters:
density: 0.6
weight: 0.2
- model: openchat/openchat-3.5-0106
parameters:
density: 0.6
weight: 0.15
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.6
weight: 0.25
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.1
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
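In this configuration, `density` is the fraction of each model's delta (relative to the shared base) that is kept, and `weight` scales its contribution to the merge. A simplified sketch of the DARE drop-and-rescale step for a single tensor (it omits the TIES sign-consensus step that `dare_ties` also applies, so it is an illustration rather than the mergekit code):
```python
import torch

def dare_delta(finetuned, base, density=0.6):
    delta = finetuned - base                                  # task vector relative to the base model
    mask = torch.bernoulli(torch.full_like(delta, density))   # keep ~density of the entries
    return delta * mask / density                             # rescale survivors to preserve the expectation

def dare_merge(base, finetuned_tensors, weights, density=0.6):
    merged = base.clone()
    for ft, w in zip(finetuned_tensors, weights):
        merged += w * dare_delta(ft, base, density)           # weighted sum of sparsified deltas
    return merged
```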
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Darewin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Darewin-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.87|
|AI2 Reasoning Challenge (25-Shot)|68.60|
|HellaSwag (10-Shot) |86.22|
|MMLU (5-Shot) |65.21|
|TruthfulQA (0-shot) |60.38|
|Winogrande (5-shot) |79.79|
|GSM8k (5-shot) |71.04|
|
kwchoi/DPO_mistral_7b_ultra_0124_v1 | kwchoi | 2024-03-06T01:45:13Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T00:21:32Z | ---
language:
- en
license: apache-2.0
model-index:
- name: DPO_mistral_7b_ultra_0124_v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_7b_ultra_0124_v1
name: Open LLM Leaderboard
---
Testing the Mistral-Instruct model with the Orca DPO dataset, to see the effects of DPO for my own study. Used Mistral-7B-Instruct-v0.2 due to its good performance.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kwchoi__DPO_mistral_7b_ultra_0124_v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.45|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |86.39|
|MMLU (5-Shot) |59.78|
|TruthfulQA (0-shot) |69.45|
|Winogrande (5-shot) |79.48|
|GSM8k (5-shot) |25.47|
|
wang7776/vicuna-7b-v1.3-attention-sparsity-30 | wang7776 | 2024-02-05T18:22:14Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2306.11695",
"arxiv:2302.13971",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-26T20:55:25Z | ---
inference: false
license: apache-2.0
---
# Overview
This model has been pruned to 30% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/lmsys/vicuna-7b-v1.3).
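For reference, Wanda scores each weight by its magnitude times the l2 norm of the corresponding input activations (collected from a small calibration set) and zeroes the lowest-scoring weights within each output row. A simplified sketch of that scoring rule for one linear layer (an illustration, not the code used to produce this checkpoint):
```python
import torch

def wanda_prune(weight, activations, sparsity=0.3):
    # weight: [out_features, in_features]; activations: [num_tokens, in_features] from calibration data
    act_norm = activations.norm(p=2, dim=0)             # per-input-channel activation norm
    scores = weight.abs() * act_norm.unsqueeze(0)        # score_ij = |W_ij| * ||X_j||_2
    k = int(weight.shape[1] * sparsity)                  # number of weights to drop in each output row
    drop_idx = torch.argsort(scores, dim=1)[:, :k]       # lowest-scoring columns per row
    pruned = weight.clone()
    pruned.scatter_(1, drop_idx, 0.0)
    return pruned
```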
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) |
cognitivecomputations/openchat-3.5-0106-laser | cognitivecomputations | 2024-01-30T20:48:02Z | 737 | 18 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T03:19:29Z | ---
license: apache-2.0
---
by David, Fernando and Eric
Sponsored by: [VAGO Solutions](https://vago-solutions.de) and [HyperSpace.Ai](https://hyperspace.computer/)
Join our Discord! https://discord.gg/cognitivecomputations
A laser version of [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
It follows the implementation of laserRMT @ https://github.com/cognitivecomputations/laserRMT
Here, we inspect the model's layers to find the ones with the lowest signal-to-noise ratios (i.e., the ones most dominated by noise) and apply Laser interventions to them, using the Marchenko–Pastur law to estimate that ratio.
This is intended to be the first in a family of experiments being carried out @ Cognitive Computations.
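For readers curious how such a signal-to-noise ratio can be estimated, the sketch below applies a Marchenko–Pastur-style random-matrix cutoff to a layer's singular values. It is only an illustration; the actual scoring in laserRMT may differ, and the noise-scale estimate here is a rough assumption.

```python
import torch

def mp_snr(weight: torch.Tensor) -> float:
    """Illustrative signal-to-noise score for a weight matrix, using a
    Marchenko-Pastur / random-matrix upper edge as the noise cutoff."""
    m, n = weight.shape
    s = torch.linalg.svdvals(weight.float())
    # Rough noise-scale estimate from the lower half of the spectrum (assumption)
    sigma = s[len(s) // 2:].std()
    # Largest singular value expected from an m x n pure-noise matrix: sigma * (sqrt(m) + sqrt(n))
    mp_edge = sigma * (m ** 0.5 + n ** 0.5)
    signal = s[s > mp_edge].sum()
    noise = s[s <= mp_edge].sum().clamp_min(1e-8)
    return (signal / noise).item()

# Layers with a low ratio are candidates for a LASER-style low-rank intervention.
```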
|
eren23/NeuralDareBeagle-7B-slerp | eren23 | 2024-03-05T15:45:54Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"mlabonne/DareBeagle-7B-v2",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:mlabonne/DareBeagle-7B-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T15:55:06Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/DareBeagle-7B-v2
base_model:
- mlabonne/NeuralBeagle14-7B
- mlabonne/DareBeagle-7B-v2
model-index:
- name: NeuralDareBeagle-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/NeuralDareBeagle-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/NeuralDareBeagle-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/NeuralDareBeagle-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/NeuralDareBeagle-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/NeuralDareBeagle-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/NeuralDareBeagle-7B-slerp
name: Open LLM Leaderboard
---
# NeuralDareBeagle-7B-slerp
NeuralDareBeagle-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/DareBeagle-7B-v2](https://huggingface.co/mlabonne/DareBeagle-7B-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: mlabonne/DareBeagle-7B-v2
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/DareBeagle-7B-v2
parameters:
t:
- filter: self_attn
value: [0.5, 0.7, 0.3, 0.7, 1]
- filter: mlp
value: [0.5, 0.3, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
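For reference, slerp interpolates each pair of weight tensors along the arc between them instead of along a straight line, with the `t` values above controlling how far each block moves from the base model toward the other model. A minimal sketch of the interpolation (not mergekit's actual implementation):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative)."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    dot = torch.clamp(a_n @ b_n, -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:  # nearly parallel: fall back to linear interpolation
        out = (1 - t) * a + t * b
    else:
        out = (torch.sin((1 - t) * theta) * a + torch.sin(t * theta) * b) / torch.sin(theta)
    return out.reshape(v0.shape).to(v0.dtype)

# t = 0 returns the base model's tensor, t = 1 the other model's; the filter
# blocks above assign different t schedules to attention and MLP weights.
```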
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "eren23/NeuralDareBeagle-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__NeuralDareBeagle-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.60|
|AI2 Reasoning Challenge (25-Shot)|72.10|
|HellaSwag (10-Shot) |88.20|
|MMLU (5-Shot) |64.99|
|TruthfulQA (0-shot) |69.18|
|Winogrande (5-shot) |82.56|
|GSM8k (5-shot) |70.58|
|
tourist800/Mistral-7B-Merge-14-v0.2 | tourist800 | 2024-01-28T20:51:47Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"amazon/MistralLite",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T20:48:04Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
- amazon/MistralLite
---
# Mistral-7B-Merge-14-v0.2
Mistral-7B-Merge-14-v0.2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
* [amazon/MistralLite](https://huggingface.co/amazon/MistralLite)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
layer_range: [0, 32]
- model: amazon/MistralLite
layer_range: [0, 32]
merge_method: slerp
base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
CultriX/Wernicke-7B-v8 | CultriX | 2024-01-29T01:16:11Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"kaitchup/Mayonnaise-4in1-022",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"vanillaOVO/supermario_v2",
"FelixChao/WestSeverus-7B-DPO-v2",
"base_model:kaitchup/Mayonnaise-4in1-022",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:vanillaOVO/supermario_v2",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T22:06:13Z | ---
tags:
- merge
- mergekit
- lazymergekit
- kaitchup/Mayonnaise-4in1-022
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- vanillaOVO/supermario_v2
- FelixChao/WestSeverus-7B-DPO-v2
base_model:
- kaitchup/Mayonnaise-4in1-022
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- vanillaOVO/supermario_v2
- FelixChao/WestSeverus-7B-DPO-v2
license: apache-2.0
---
# Wernicke-7B-v8
Wernicke-7B-v8 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kaitchup/Mayonnaise-4in1-022](https://huggingface.co/kaitchup/Mayonnaise-4in1-022)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [vanillaOVO/supermario_v2](https://huggingface.co/vanillaOVO/supermario_v2)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
## 🧩 Configuration
```yaml
models:
- model: CultriX/Wernicke-7B-v1
# No parameters necessary for base model
- model: kaitchup/Mayonnaise-4in1-022
parameters:
density: 0.53
weight: 0.40
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
density: 0.53
weight: 0.25
- model: vanillaOVO/supermario_v2
parameters:
density: 0.53
weight: 0.25
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.53
weight: 0.20
merge_method: dare_ties
base_model: CultriX/Wernicke-7B-v1
parameters:
int8_mask: true
dtype: float16
```
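For context, `dare_ties` first applies DARE to each model's delta from the base — randomly dropping a fraction of the delta and rescaling the survivors so its expected value is preserved — before TIES-style sign election and summation. A rough sketch of the DARE step for one tensor (illustrative only; the `density` values above play the role of the keep probability):

```python
import torch

def dare_delta(task_weight: torch.Tensor, base_weight: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: keep each element of the task vector with probability `density`,
    drop the rest, and rescale survivors so the expected delta is unchanged."""
    delta = task_weight - base_weight
    keep = torch.bernoulli(torch.full_like(delta, density)).bool()
    return torch.where(keep, delta / density, torch.zeros_like(delta))

# Each model's rescaled delta is then weighted (the `weight` values above),
# sign-elected TIES-style, and added back onto the base model's weights.
```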
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/Wernicke-7B-v8"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Gille/StrangeMerges_7-7B-slerp | Gille | 2024-03-07T07:04:17Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_6-7B-dare_ties",
"berkeley-nest/Starling-LM-7B-alpha",
"base_model:Gille/StrangeMerges_6-7B-dare_ties",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T22:37:18Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_6-7B-dare_ties
- berkeley-nest/Starling-LM-7B-alpha
base_model:
- Gille/StrangeMerges_6-7B-dare_ties
- berkeley-nest/Starling-LM-7B-alpha
---
# StrangeMerges_7-7B-slerp
StrangeMerges_7-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_6-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_6-7B-dare_ties)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_6-7B-dare_ties
layer_range: [0, 32]
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_6-7B-dare_ties
parameters:
t:
- filter: self_attn
value: [0.9, 0.5, 0.3, 0.7, 0.1]
- filter: mlp
value: [0.1, 0.5, 0.7, 0.3, 0.9]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_7-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RatanRohith/NeuralPizza-7B-V0.3 | RatanRohith | 2024-01-29T15:51:46Z | 737 | 0 | Transformers | [
"Transformers",
"safetensors",
"mistral",
"text-generation",
"transformers",
"fine-tuned",
"language-modeling",
"direct-preference-optimization",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-01-29T15:43:32Z | ---
library_name: Transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
datasets:
- Intel/orca_dpo_pairs
license: apache-2.0
---
## Model Description
NeuralPizza-7B-V0.3 is a fine-tuned version of the RatanRohith/NeuralPizza-7B-V0.1 model, specialized through Direct Preference Optimization (DPO). It was fine-tuned using the argilla/distilabel-intel-orca-dpo-pairs dataset, focusing on enhancing model performance based on preference comparisons.
## Intended Use
This model is primarily intended for research and experimental applications in language modeling, especially for exploring the Direct Preference Optimization method. It provides insights into the nuances of DPO in the context of language model tuning.
## Training Data
The model was fine-tuned using the argilla/distilabel-intel-orca-dpo-pairs dataset. This dataset is designed for applying and testing Direct Preference Optimization techniques in language models.
## Training Procedure
The training followed the guidelines and methodologies outlined in the "Fine-Tune a Mistral 7B Model with Direct Preference Optimization" guide from Medium's Towards Data Science platform. Specific training regimes and hyperparameters are based on that guide: https://medium.com/towards-data-science/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
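For readers unfamiliar with DPO, the objective pushes up the log-probability margin of the chosen response over the rejected one, measured relative to a frozen reference model. A minimal sketch of the loss is shown below; in practice this is typically handled by a trainer such as TRL's `DPOTrainer` rather than written by hand.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """Direct Preference Optimization loss for a batch of preference pairs.
    Each argument is a tensor of summed token log-probabilities per sequence."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```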
## Limitations and Bias
As an experimental model, it may carry biases inherent from its training data. The model's performance and outputs should be critically evaluated, especially in sensitive and diverse applications. |
jpechg/Sour-Marcoro-12.5B | jpechg | 2024-01-31T22:27:40Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:mlabonne/Marcoro14-7B-slerp",
"base_model:Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T22:10:00Z | ---
base_model:
- mlabonne/Marcoro14-7B-slerp
- Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
tags:
- mergekit
- merge
license: apache-2.0
---
# models
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
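Passthrough does no averaging at all: it simply stacks the selected layer ranges from the configuration below on top of one another, so the merged network is deeper than either parent. A quick, purely illustrative check of where the "12.5B" figure comes from (the per-layer parameter count is an assumption):

```python
# Layer ranges taken from the configuration below: the 32 selected layers of the
# SOLAR-based model, followed by layers 8-31 of Marcoro14.
layer_ranges = [(0, 32), (8, 32)]
total_layers = sum(end - start for start, end in layer_ranges)  # 32 + 24 = 56

# Assuming roughly 0.22B parameters per Mistral-style layer plus ~0.26B of embeddings
# (illustrative figures only), this lands in the ~12.5B range the name refers to.
approx_params_b = total_layers * 0.22 + 0.26
print(total_layers, round(approx_params_b, 1))
```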
### Models Merged
The following models were included in the merge:
* [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
* [Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
layer_range: [0, 32]
- sources:
- model: mlabonne/Marcoro14-7B-slerp
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
``` |
TeeZee/BigMaid-20B-v1.0 | TeeZee | 2024-03-04T14:34:30Z | 737 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"roleplay",
"text-generation-inference",
"merge",
"not-for-all-audiences",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-02T19:45:09Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- roleplay
- text-generation-inference
- merge
- not-for-all-audiences
model-index:
- name: BigMaid-20B-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
name: Open LLM Leaderboard
---
# BigMaid-20B-v1.0

## Model Details
- A result of interleaving layers of [KatyTheCutie/EstopianMaid-13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B) with itself.
- The resulting model has approximately 20 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/BigMaid-20B-v1.0/resolve/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
- A bigger version of the original, and just as uncensored.
- Retains all the good qualities of the original, with an additional affinity for abstract and lighthearted humor.
All comments are greatly appreciated. Download it, test it, and if you appreciate my work, consider buying me some fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__BigMaid-20B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.07|
|AI2 Reasoning Challenge (25-Shot)|61.35|
|HellaSwag (10-Shot) |85.26|
|MMLU (5-Shot) |57.15|
|TruthfulQA (0-shot) |55.29|
|Winogrande (5-shot) |75.30|
|GSM8k (5-shot) | 2.05|
|
alchemonaut/QuartetAnemoi-70B-t0.0001 | alchemonaut | 2024-03-07T09:23:28Z | 737 | 30 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T23:19:52Z | ---
tags:
- merge
license: other
model-index:
- name: QuartetAnemoi-70B-t0.0001
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001/resolve/main/anemoi.png>
# QuartetAnemoi-70B-t0.0001
A sequential merge using a custom algorithm (NearSwap) of:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)
- [Aurora-Nights-70B-v1.0](https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0)
- [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1)
<br/>
In our testing, this model seems like a storyteller, as might be expected, but the changes from this merge are extremely soft. We were impressed that, unlike most models, at the end of a story it did not often use cliches such as "In the end", "And so", "beacon of hope", etc.
<br/>
<br/>
# Quants
Most of the popular quant formats are available now, thanks to community efforts.
| Type | Misc | Author |
| ----- | ----- | ----- |
| [GGUF](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001-GGUF/tree/main) | | alchemonaut |
| [GGUF](https://huggingface.co/Nexesenex/alchemonaut_QuartetAnemoi-70B-iMat.GGUF) | iMat | Nexesenex |
| [GGUF](https://huggingface.co/mradermacher/QuartetAnemoi-70B-t0.0001-i1-GGUF) | iMat | mradermacher |
| [GGUF](https://huggingface.co/mradermacher/QuartetAnemoi-70B-t0.0001-GGUF) | Full Set | mradermacher |
| [exl2](https://huggingface.co/llmixer/QuartetAnemoi-70B-t0.0001-2.5bpw-h6-exl2) | 2.5bpw | llmixer |
| [exl2](https://huggingface.co/altomek/QuartetAnemoi-70B-t0.0001-3.75bpw-EXL2) | 3.75bpw | altomek |
| [exl2](https://huggingface.co/llmixer/QuartetAnemoi-70B-t0.0001-4bpw-h6-exl2) | 4.0bpw| llmixer |
| [exl2](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001-b4.6-h8-exl2) | 4.6bpw| alchemonaut |
| [exl2](https://huggingface.co/llmixer/QuartetAnemoi-70B-t0.0001-6.0bpw-h6-exl2) | 6.0bpw | llmixer |
| [AWQ](https://huggingface.co/tachyphylaxis/QuartetAnemoi-70B-t0.0001-AWQ) | | tachyphylaxis |
<br/>
<br/>
# NearSwap Algorithm
NearSwap retains most of the weights of the base model (Miqu), but when a weight is similar between the two, it is interpolated to the secondary model value. A parameter *t* specifies the sameness threshold. When the distance between two values is below *t*, the weight from the secondary model is used.
This version of the model uses *t* = 0.0001. At this *t*, about 0.8% of weights are fully switched to the secondary model during each pass. Model quality rapidly degrades above *t* = 0.0025:
- *t* = 0.0001 (~0.8% full swap): This model
- *t* = 0.0003 (~2% full swap)
- *t* = 0.001 (~10% full swap): [BoreanGale-70B](https://huggingface.co/alchemonaut/BoreanGale-70B)
- *t* = 0.0025 (~18% full swap): Generates one paragraph okay, but then reverts to garbage
- *t* = 0.005 (~35% full swap): Garbage; semi-related word lists
- *t* = 0.01 (~55% full swap): Garbage; pseudorandom tokens output
For QuartetAnemoi-70B-t0.0001, the three secondary models were each merged sequentially with *t* = 0.0001.
NearSwap implementation:
```
t: Union[float, np.ndarray],
v0: Union[np.ndarray, torch.Tensor],
v1: Union[np.ndarray, torch.Tensor],
...
lweight = numpy.absolute(v0-v1)
lweight = t / lweight
lweight = numpy.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
numpy.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)
res = lerp(lweight,v0,v1)
```
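For readers who want to run the idea end to end, a self-contained version might look like the sketch below. The function wrapper and the `lerp` helper are assumptions added for completeness; only the thresholding lines mirror the excerpt above.

```python
import numpy as np

def lerp(t: np.ndarray, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    return (1.0 - t) * v0 + t * v1

def nearswap(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Interpolate v0 (base) toward v1 (secondary) only where the weights are
    already close: the blend weight is 1 where |v0 - v1| <= t and t/|v0 - v1| otherwise."""
    lweight = np.absolute(v0 - v1)
    lweight = t / lweight                                  # >= 1 where the difference is below t
    lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    np.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)
    return lerp(lweight, v0, v1)
```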
<br/>
<br/>
# License and Use
Since the ultimate origin of Miqu is at this time unknown beyond speculation, this model is for noncommercial research use only.
<br/>
<br/>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_alchemonaut__QuartetAnemoi-70B-t0.0001)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.86|
|AI2 Reasoning Challenge (25-Shot)|73.38|
|HellaSwag (10-Shot) |88.9|
|MMLU (5-Shot) |75.42|
|TruthfulQA (0-shot) |69.53|
|Winogrande (5-shot) |85.32|
|GSM8k (5-shot) |68.61|
|
cloudyu/60B_MoE_Coder_v3 | cloudyu | 2024-02-09T13:01:02Z | 737 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-08T05:17:53Z | ---
license: other
tags:
- yi
- moe
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
This is an English and Chinese chat model aimed at coding, but it also works for general-purpose use.
Example output (original prompt and response in Chinese):
Prompt: Write a Python program to scrape the most popular content on YouTube, fetching the data from the web page.
Response: The following Python script scrapes the most popular videos from YouTube and saves them to a local file. The script uses Selenium WebDriver to simulate browser actions for logging in and searching for videos, then uses the Beautiful Soup library to extract the required information. If you need any help or would like suggestions for changes, please let me know!
```
from selenium import webdriver
from bs4 import BeautifulSoup as BS
import getpass
import os
import time
import urllib.request

# Set the ChromeDriver path
CHROMEDRIVER_PATH = './chromedriver.exe'  # replace with the actual path

def main():
    try:
        # Launch the Chrome browser and open YouTube, then the Google sign-in page
        browser = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH)
        browser.get('https://www.youtube.com/')
        login_url = 'https://accounts.google.com/signin'
        browser.get(login_url)

        # Collect the account name and password and submit the login form
        username = input('Username: ')
        password = getpass.getpass()  # keep the password from being echoed in the terminal
        submit_button = browser.find_element_by_class_name('whsLW')
        submit_button.click()

        # Type the keywords into the search bar and run the search
        search_query = 'most popular youtube videos today'
        search_box = browser.find_element_by_id('search_query')
        search_box.send_keys(search_query)
        enter_key = browser.find_element_by_class_name('ytp-suggest-renderer__input-icon')
        enter_key.click()
        results = BS(browser.page_source, 'html.parser')

        # Extract the most popular video links from the results and save them locally
        videos = results.find('ol', {'class': 'r'})
        video_urls = [a['href'] for a in videos.find_all('a')]
        saved_path = 'popular_videos/' + os.getcwd().split('/')[-2] + '_' + str(int(time.time())) + '/'
        if not os.path.exists(saved_path):
            os.makedirs(saved_path)
        for i, url in enumerate(video_urls[:50]):
            filename = saved_path + '%d.mp4' % (i + 1)
            urllib.request.urlretrieve(url, filename)
        print('Downloaded top 50 videos to:', saved_path)
    except Exception as e:
        print(e)

if __name__ == '__main__':
    main()
``` |
louisbrulenaudet/Pearl-34B-ties | louisbrulenaudet | 2024-03-22T07:07:05Z | 737 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"jondurbin/bagel-dpo-34b-v0.2",
"abacusai/MetaMath-Bagel-DPO-34B",
"conversational",
"en",
"base_model:jondurbin/bagel-dpo-34b-v0.2",
"base_model:abacusai/MetaMath-Bagel-DPO-34B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T02:17:34Z | ---
tags:
- merge
- mergekit
- jondurbin/bagel-dpo-34b-v0.2
- abacusai/MetaMath-Bagel-DPO-34B
base_model:
- jondurbin/bagel-dpo-34b-v0.2
- abacusai/MetaMath-Bagel-DPO-34B
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: Pearl-34B-ties
results:
- task:
type: text-generation
metrics:
- name: Average
type: Average
value: 75.48
- name: ARC
type: ARC
value: 70.99
- name: GSM8K
type: GSM8K
value: 67.48
- name: Winogrande
type: Winogrande
value: 82.64
- name: TruthfulQA
type: TruthfulQA
value: 70.32
- name: HellaSwag
type: HellaSwag
value: 84.83
- name: MMLU
type: MMLU
value: 76.63
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>
# Pearl-34B-ties, an xtraordinary 34B model
**03-22-2024 - To date, louisbrulenaudet/Pearl-34B-ties is the "Best 🤝 base merges and moerges model of around 30B" on the Open LLM Leaderboard.**
Pearl-34B-ties is a merge of the following models:
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [abacusai/MetaMath-Bagel-DPO-34B](https://huggingface.co/abacusai/MetaMath-Bagel-DPO-34B)
## Evaluation
The evaluation was performed using the HuggingFace Open LLM Leaderboard.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | #Params (B) |
|--------------------------------------------------|---------|-------|-----------|-------|------------|------------|-------|--------------|
| **louisbrulenaudet/Pearl-34B-ties** | **75.48** | 70.99 | 84.83 | **76.63** | 70.32 | 82.64 | 67.48 | 34.39 |
| **louisbrulenaudet/Pearl-7B-0211-ties** | **75.11** | **71.42** | **88.86** | 63.91 | **71.46** | **84.37** | 70.66 | 7.24 |
| NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | 73.35 | 71.08 | 87.29 | 72.17 | 54.83 | 83.11 | 71.65 | 46.7 |
| argilla/notus-8x7b-experiment | 73.18 | 70.99 | 87.73 | 71.33 | 65.79 | 81.61 | 61.64 | 46.7 |
| **louisbrulenaudet/Pearl-7B-slerp** | 72.75 | 68.00 | 87.16 | 64.04 | 62.35 | 81.29 | **73.62** | 7.24 |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.7 | 70.14 | 87.55 | 71.4 | 64.98 | 81.06 | 61.11 | 46.7 |
| microsoft/Orca-2-13b | 61.98 | 60.92 | 79.85 | 60.3 | 56.42 | 76.56 | 37.83 | 13 |
| microsoft/phi-2 | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 | 2.78 |
### Ties merging
TIES-Merging is a method designed to facilitate the efficient merging of multiple task-specific models into a consolidated multitask model. It addresses two primary challenges encountered in the process of model merging with a focus on maintaining objectivity.
One key challenge tackled by TIES-Merging involves addressing redundancy in model parameters. This is achieved by identifying and eliminating redundant parameters within task-specific models, emphasizing the changes made during fine-tuning and selectively retaining the top-k% most significant changes while discarding the rest.
Another challenge pertains to conflicts arising from disagreements between parameter signs across different models. TIES-Merging resolves these conflicts by creating a unified sign vector representing the most dominant direction of change across all models.
The TIES-Merging process consists of three steps:
- Trim: Reduces redundancy in task-specific models by retaining a fraction of the most significant parameters (density parameter) and resetting the remaining parameters to zero.
- Elect Sign: Resolves sign conflicts across different models by creating a unified sign vector based on the most dominant direction (positive or negative) in terms of cumulative magnitude.
- Disjoint Merge: Averages parameter values aligned with the unified sign vector, excluding zero values.
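A compact sketch of those three steps on a stack of task vectors (deltas from the base model) is shown below; the density handling and shapes are illustrative, not the exact mergekit implementation.

```python
import torch

def ties_merge(base: torch.Tensor, task_weights: list[torch.Tensor], density: float = 0.5) -> torch.Tensor:
    # Trim: keep only the top-density fraction of each task vector by magnitude
    deltas = []
    for w in task_weights:
        delta = w - base
        k = max(1, int(delta.numel() * density))
        thresh = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        deltas.append(torch.where(delta.abs() >= thresh, delta, torch.zeros_like(delta)))
    stacked = torch.stack(deltas)                      # (num_models, *shape)

    # Elect sign: dominant direction by cumulative magnitude per parameter
    sign = torch.sign(stacked.sum(dim=0))
    sign[sign == 0] = 1.0

    # Disjoint merge: average only the non-zero entries that agree with the elected sign
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    summed = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    count = agree.sum(dim=0).clamp_min(1)
    return base + summed / count
```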
## Configuration
```yaml
models:
- model: abacusai/Smaug-34B-v0.1
- model: jondurbin/bagel-dpo-34b-v0.2
parameters:
density: 0.45
weight: 0.5
- model: abacusai/MetaMath-Bagel-DPO-34B
parameters:
density: 0.48
weight: 0.5
merge_method: ties
base_model: abacusai/Smaug-34B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "louisbrulenaudet/Pearl-34B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2023,
author = {Louis Brulé Naudet},
title = {Pearl-34B-ties, an xtraordinary 34B model},
year = {2023},
howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-34B-ties}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
FelixChao/Scorpio-7B | FelixChao | 2024-02-14T05:03:53Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T04:57:06Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CultriX/NeuralTrix-bf16 | CultriX | 2024-03-03T02:09:53Z | 737 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bardsai/jaskier-7b-dpo-v3.3",
"CultriX/NeuralTrix-v4-bf16",
"CultriX/NeuralTrix-7B-dpo",
"base_model:bardsai/jaskier-7b-dpo-v3.3",
"base_model:CultriX/NeuralTrix-v4-bf16",
"base_model:CultriX/NeuralTrix-7B-dpo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T13:30:21Z | ---
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v3.3
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-7B-dpo
base_model:
- bardsai/jaskier-7b-dpo-v3.3
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-7B-dpo
license: apache-2.0
---
# NeuralTrix-bf16
NeuralTrix-bf16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bardsai/jaskier-7b-dpo-v3.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v3.3)
* [CultriX/NeuralTrix-v4-bf16](https://huggingface.co/CultriX/NeuralTrix-v4-bf16)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
## 🧩 Configuration
```yaml
models:
- model: eren23/dpo-binarized-NeuralTrix-7B
# no parameters necessary for base model
- model: bardsai/jaskier-7b-dpo-v3.3
parameters:
density: 0.65
weight: 0.4
- model: CultriX/NeuralTrix-v4-bf16
parameters:
density: 0.6
weight: 0.35
- model: CultriX/NeuralTrix-7B-dpo
parameters:
density: 0.6
weight: 0.35
merge_method: dare_ties
base_model: eren23/dpo-binarized-NeuralTrix-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
fzzhang/mistralv1_gsm8k_merged | fzzhang | 2024-02-16T08:24:57Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:gsm8k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T08:11:53Z | ---
library_name: transformers
license: apache-2.0
datasets:
- gsm8k
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fzzhang/mistralv1_gsm8k_merged_s | fzzhang | 2024-02-16T20:52:13Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:gsm8k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T14:59:25Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- gsm8k
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gmonsoon/MaxiCPM-3x3B-Test | gmonsoon | 2024-03-04T12:57:51Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"indischepartij/MiniCPM-3B-Hercules-v2.0",
"indischepartij/MiniCPM-3B-OpenHermes-2.5-v2",
"indischepartij/MiniCPM-3B-Bacchus",
"conversational",
"base_model:indischepartij/MiniCPM-3B-Hercules-v2.0",
"base_model:indischepartij/MiniCPM-3B-OpenHermes-2.5-v2",
"base_model:indischepartij/MiniCPM-3B-Bacchus",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T23:46:57Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- indischepartij/MiniCPM-3B-Hercules-v2.0
- indischepartij/MiniCPM-3B-OpenHermes-2.5-v2
- indischepartij/MiniCPM-3B-Bacchus
base_model:
- indischepartij/MiniCPM-3B-Hercules-v2.0
- indischepartij/MiniCPM-3B-OpenHermes-2.5-v2
- indischepartij/MiniCPM-3B-Bacchus
model-index:
- name: MaxiCPM-3x3B-Test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 45.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/MaxiCPM-3x3B-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 71.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/MaxiCPM-3x3B-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/MaxiCPM-3x3B-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.06
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/MaxiCPM-3x3B-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/MaxiCPM-3x3B-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/MaxiCPM-3x3B-Test
name: Open LLM Leaderboard
---
# MaxiCPM-3x3B-Test
MaxiCPM-3x3B-Test is a Mixure of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [indischepartij/MiniCPM-3B-Hercules-v2.0](https://huggingface.co/indischepartij/MiniCPM-3B-Hercules-v2.0)
* [indischepartij/MiniCPM-3B-OpenHermes-2.5-v2](https://huggingface.co/indischepartij/MiniCPM-3B-OpenHermes-2.5-v2)
* [indischepartij/MiniCPM-3B-Bacchus](https://huggingface.co/indischepartij/MiniCPM-3B-Bacchus)
## 🧩 Configuration
```yaml
base_model: openbmb/MiniCPM-2B-dpo-bf16-llama-format
experts:
- source_model: indischepartij/MiniCPM-3B-Hercules-v2.0
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: indischepartij/MiniCPM-3B-OpenHermes-2.5-v2
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: indischepartij/MiniCPM-3B-Bacchus
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
dtype: bfloat16
```
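The `positive_prompts` lists steer the router: each expert's gate is initialized to favor hidden states that resemble its prompts, and at inference the router then picks experts per token. The sketch below shows that token-level top-k gating in its simplest form; the embedding dimension, gate initialization, and top-k value are illustrative assumptions rather than mergekit-moe's actual code.

```python
import torch
import torch.nn.functional as F

def route_tokens(hidden: torch.Tensor, gate_weight: torch.Tensor, top_k: int = 2):
    """hidden: (tokens, dim); gate_weight: (num_experts, dim), e.g. initialized from
    the mean hidden state of each expert's positive prompts (assumption)."""
    logits = hidden @ gate_weight.t()                  # (tokens, num_experts)
    weights, experts = torch.topk(logits, top_k, dim=-1)
    weights = F.softmax(weights, dim=-1)               # normalized mixing weights
    return experts, weights                            # which experts to run, and how to mix them

# Example with 3 experts (matching the config above) and 4 tokens
h = torch.randn(4, 2304)
g = torch.randn(3, 2304)
experts, weights = route_tokens(h, g)
```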
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gmonsoon/MaxiCPM-3x3B-Test"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gmonsoon__MaxiCPM-3x3B-Test)
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.90|
|AI2 Reasoning Challenge (25-Shot)|45.99|
|HellaSwag (10-Shot) |71.74|
|MMLU (5-Shot) |52.88|
|TruthfulQA (0-shot) |41.06|
|Winogrande (5-shot) |66.85|
|GSM8k (5-shot) |44.88|
|
yam-peleg/Experiment22-7B | yam-peleg | 2024-02-22T14:40:51Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-22T13:48:15Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment22-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details in the future experiments.
---
license: apache-2.0
--- |
OpenBuddy/openbuddy-mistral-7b-v19.1-4k | OpenBuddy | 2024-03-05T16:47:13Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T14:40:11Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
yam-peleg/Experiment28-7B | yam-peleg | 2024-03-02T01:14:19Z | 737 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T14:45:05Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment28-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details in the future experiments.
---
license: apache-2.0
--- |
AtAndDev/Ogno-Monarch-Neurotic-9B-Passthrough | AtAndDev | 2024-03-01T15:41:52Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bardsai/jaskier-7b-dpo-v5.6",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T15:37:29Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v5.6
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
---
# Ogno-Monarch-Neurotic-7B-Passthrough
Ogno-Monarch-Neurotic-7B-Passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: bardsai/jaskier-7b-dpo-v5.6
layer_range: [0, 32]
- sources:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
``` |
mayacinka/yam-jom-7B-ties | mayacinka | 2024-03-04T14:08:08Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"yam-peleg/Experiment26-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"base_model:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-03T05:37:41Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
model-index:
- name: yam-jom-7B-ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.51
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-ties
name: Open LLM Leaderboard
---
# yam-jom-7B-ties
yam-jom-7B-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
models:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
parameters:
weight: 0.35
- model: yam-peleg/Experiment26-7B
parameters:
weight: 0.65
base_model: yam-peleg/Experiment26-7B
merge_method: ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/yam-jom-7B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B-ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.44|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |89.05|
|MMLU (5-Shot) |64.77|
|TruthfulQA (0-shot) |77.51|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |69.60|
|
mayacinka/yam-jom-7B-slerp | mayacinka | 2024-03-04T14:07:04Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"yam-peleg/Experiment26-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"base_model:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-03T05:53:13Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
model-index:
- name: yam-jom-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.77
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B-slerp
name: Open LLM Leaderboard
---
# yam-jom-7B-slerp
yam-jom-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
merge_method: slerp
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/yam-jom-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.45|
|AI2 Reasoning Challenge (25-Shot)|72.70|
|HellaSwag (10-Shot) |89.02|
|MMLU (5-Shot) |64.64|
|TruthfulQA (0-shot) |77.77|
|Winogrande (5-shot) |84.69|
|GSM8k (5-shot) |69.90|
|
VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat | VAGOsolutions | 2024-03-10T17:41:15Z | 737 | 6 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"finetune",
"sft",
"dpo",
"laser",
"augmentation",
"german",
"english",
"moe",
"conversational",
"en",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T23:29:29Z | ---
license: apache-2.0
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
tags:
- finetune
- sft
- dpo
- laser
- augmentation
- german
- english
- moe
---

## VAGO solutions SauerkrautLM-14b-MoE-LaserChat
Introducing **SauerkrautLM-14b-MoE-LaserChat** – our Sauerkraut (2x7b) 14b MoE version of the powerful [SauerkrautLM-7b-LaserChat](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-LaserChat) and [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) !
By combining the two models, we were able to significantly increase both the German and English language skills.
In addition, the initial SauerkrautLM-7b-LaserChat also acts as an adapter for Experiment26-7B, which means it benefits from the chat capabilities of the SauerkrautLM-7b-LaserChat.
At the same time, the SauerkrautLM-7b-LaserChat benefits from the knowledge and creativity of Experiment26-7B.
The model **SauerkrautLM-14b-MoE-LaserChat** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
Much appreciation goes to the tremendous research effort of **Fernando Fernandes Neto, David Golchinfar and Eric Hartford on their laserRMT approach.**
Without their independent research collaboration this model release would not have been possible.
# Table of Contents
1. [Overview of all SauerkrautLM-14b-MoE-LaserChat models](#all-sauerkrautlm-14b-MoE-laserchat-models)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
3. [Evaluation](#evaluation)
5. [Disclaimer](#disclaimer)
6. [Contact](#contact)
7. [Collaborations](#collaborations)
8. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-14b-MoE-LaserChat Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-14b-MoE-LaserChat | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-14b-MoE-LaserChat**
- **Model Type:** SauerkrautLM-14b-MoE-LaserChat is a MoE Model based on [SauerkrautLM-7b-LaserChat](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-LaserChat) and [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
- **Language(s):** German, English
- **License:** Apache 2.0
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.computer](https://hyperspace.computer/)
We improved the German language skills on this model further. Nevertheless, certain formulations may occur that are not entirely correct.
### Prompt Template:
```
GPT4 Correct User: Hallo, wie geht es dir?<|end_of_turn|>GPT4 Correct Assistant: Hallo! Ich bin ein künstliches Intelligenzsystem und habe keine persönlichen Gefühle oder körperliche Zustände. Wie kann ich Ihnen helfen?<|end_of_turn|>GPT4 Correct User: Ich benötige nur einen kurzen Satz, den ich in das Prompt Template veröffentlichen kann.<|end_of_turn|>GPT4 Correct Assistant:
```
```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.<|end_of_turn|>GPT4 Correct User: I just need a short sentence to post in the prompt template.<|end_of_turn|>GPT4 Correct Assistant:
```
## Evaluation
**Open LLM Leaderboard:**
benchmarked on lm-evaluation-harness 0.4.1
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 71.65 |
| ARC (25-shot) | 68.09 |
| HellaSwag (10-shot) | 84.78 |
| MMLU (5-shot) | 63.59|
| TruthfulQA (0-shot) | 58.57 |
| Winogrande (5-shot) | 80.74 |
| GSM8K (5-shot) | 74.15 |
**Performance**
| Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️|
|-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat](https://huggingface.co/VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat) | 44.38| 74.76| 58.57| 47.98| 56.42|
|[VAGOsolutions/SauerkrautLM-Gemma-7b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-7b) | 37.5| 72.46| 61.24| 45.33| 54.13|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53|
<details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary>
**AGIEval**
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------------------------------|------:|------|------|--------|-----:|---|-----:|
|agieval_sat_math | 1|none |None |acc |0.3727|± |0.0327|
| | |none |None |acc_norm|0.3045|± |0.0311|
|agieval_sat_en_without_passage| 1|none |None |acc |0.4806|± |0.0349|
| | |none |None |acc_norm|0.4612|± |0.0348|
|agieval_sat_en | 1|none |None |acc |0.7816|± |0.0289|
| | |none |None |acc_norm|0.7621|± |0.0297|
|agieval_lsat_rc | 1|none |None |acc |0.6134|± |0.0297|
| | |none |None |acc_norm|0.6059|± |0.0298|
|agieval_lsat_lr | 1|none |None |acc |0.5431|± |0.0221|
| | |none |None |acc_norm|0.5216|± |0.0221|
|agieval_lsat_ar | 1|none |None |acc |0.2435|± |0.0284|
| | |none |None |acc_norm|0.2174|± |0.0273|
|agieval_logiqa_en | 1|none |None |acc |0.3871|± |0.0191|
| | |none |None |acc_norm|0.4101|± |0.0193|
|agieval_aqua_rat | 1|none |None |acc |0.3031|± |0.0289|
| | |none |None |acc_norm|0.2677|± |0.0278|
Average: 44.38%
**GPT4All**
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|------:|------|------|--------|-----:|---|-----:|
|arc_challenge| 1|none |None |acc |0.5947|± |0.0143|
| | |none |None |acc_norm|0.6280|± |0.0141|
|arc_easy | 1|none |None |acc |0.8506|± |0.0073|
| | |none |None |acc_norm|0.8468|± |0.0074|
|boolq | 2|none |None |acc |0.8761|± |0.0058|
|hellaswag | 1|none |None |acc |0.6309|± |0.0048|
| | |none |None |acc_norm|0.8323|± |0.0037|
|openbookqa | 1|none |None |acc |0.326 |± |0.0210|
| | |none |None |acc_norm|0.470| ± |0.0223|
|piqa | 1|none |None |acc |0.8237|± |0.0089|
| | |none |None |acc_norm|0.8335|± |0.0087|
|winogrande | 1|none |None |acc |0.7466|± |0.0122|
Average: 74.76%
**TruthfulQA**
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|------:|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2| 2|none | 0|acc |0.5857|± |0.0141|
Average: 58.57%
**Bigbench**
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------|------:|----------------|-----:|-----------|-----:|---|-----:|
|bbh_zeroshot_tracking_shuffled_objects_three_objects| 2|flexible-extract| 0|exact_match|0.3120|± |0.0294|
|bbh_zeroshot_tracking_shuffled_objects_seven_objects| 2|flexible-extract| 0|exact_match|0.1560|± |0.0230|
|bbh_zeroshot_tracking_shuffled_objects_five_objects | 2|flexible-extract| 0|exact_match|0.1720|± |0.0239|
|bbh_zeroshot_temporal_sequences | 2|flexible-extract| 0|exact_match|0.3960|± |0.0310|
|bbh_zeroshot_sports_understanding | 2|flexible-extract| 0|exact_match|0.8120|± |0.0248|
|bbh_zeroshot_snarks | 2|flexible-extract| 0|exact_match|0.5843|± |0.0370|
|bbh_zeroshot_salient_translation_error_detection | 2|flexible-extract| 0|exact_match|0.4640|± |0.0316|
|bbh_zeroshot_ruin_names | 2|flexible-extract| 0|exact_match|0.4360|± |0.0314|
|bbh_zeroshot_reasoning_about_colored_objects | 2|flexible-extract| 0|exact_match|0.5520|± |0.0315|
|bbh_zeroshot_navigate | 2|flexible-extract| 0|exact_match|0.5800|± |0.0313|
|bbh_zeroshot_movie_recommendation | 2|flexible-extract| 0|exact_match|0.7320|± |0.0281|
|bbh_zeroshot_logical_deduction_three_objects | 2|flexible-extract| 0|exact_match|0.5680|± |0.0314|
|bbh_zeroshot_logical_deduction_seven_objects | 2|flexible-extract| 0|exact_match|0.3920|± |0.0309|
|bbh_zeroshot_logical_deduction_five_objects | 2|flexible-extract| 0|exact_match|0.3960|± |0.0310|
|bbh_zeroshot_geometric_shapes | 2|flexible-extract| 0|exact_match|0.3800|± |0.0308|
|bbh_zeroshot_disambiguation_qa | 2|flexible-extract| 0|exact_match|0.6760|± |0.0297|
|bbh_zeroshot_date_understanding | 2|flexible-extract| 0|exact_match|0.4400|± |0.0315|
|bbh_zeroshot_causal_judgement | 2|flexible-extract| 0|exact_match|0.5882|± |0.0361|
Average: 47.98%
</details>
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [yam-peleg](https://huggingface.co/yam-peleg) for providing such valuable model to the Open-Source community |
frankenmerger/cosmo-3b-test | frankenmerger | 2024-03-14T01:49:25Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-08T09:32:48Z | ---
widget:
- text: 'Artificial Intelligence is'
example_title: Textbook
group: Completion
- text: '<s> [INST] How to take care of exotic cars? [/INST] '
example_title: Wikihow
group: Completion
- text: '<s> [INST] Generate a story about a Dark Knight [/INST] '
example_title: Story
group: Completion
inference:
parameters:
temperature: 0.6
top_p: 0.9
top_k: 30
repetition_penalty: 1.2
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gmonsoon/frankencosmo-test"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v4-test | eren23 | 2024-03-09T06:19:43Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"en",
"dataset:argilla/OpenHermesPreferences",
"dataset:argilla/dpo-mix-7k",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T05:47:01Z | ---
library_name: transformers
tags:
- text-generation-inference
license: cc-by-nc-4.0
datasets:
- argilla/OpenHermesPreferences
- argilla/dpo-mix-7k
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nbeerbower/strange_3236-7B | nbeerbower | 2024-03-14T20:00:19Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gille/StrangeMerges_36-7B-slerp",
"base_model:Gille/StrangeMerges_32-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T14:24:04Z | ---
base_model:
- Gille/StrangeMerges_36-7B-slerp
- Gille/StrangeMerges_32-7B-slerp
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# strange_3236-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Gille/StrangeMerges_36-7B-slerp](https://huggingface.co/Gille/StrangeMerges_36-7B-slerp)
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gille/StrangeMerges_32-7B-slerp
layer_range: [0, 32]
- model: Gille/StrangeMerges_36-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_32-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
JCX-kcuf/Llama-2-7b-hf-gpt-3.5-80k | JCX-kcuf | 2024-03-12T04:16:21Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T16:34:06Z | ---
license: apache-2.0
---
## Description
This model is finetuned on the distillation data from GPT-3.5.
The base model is meta-llama/Llama-2-7b-hf
## Usage
The model has a query format as in llama-2.
```
<s> [INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{query} [/INST]
``` |
CultriX/MergeCeption-7B-v3 | CultriX | 2024-03-13T22:26:23Z | 737 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralMaxime-7B-slerp",
"mlabonne/Monarch-7B",
"CultriX/NeuralTrix-bf16",
"base_model:Kukedlc/NeuralMaxime-7B-slerp",
"base_model:mlabonne/Monarch-7B",
"base_model:CultriX/NeuralTrix-bf16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T12:09:01Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralMaxime-7B-slerp
- mlabonne/Monarch-7B
- CultriX/NeuralTrix-bf16
base_model:
- Kukedlc/NeuralMaxime-7B-slerp
- mlabonne/Monarch-7B
- CultriX/NeuralTrix-bf16
license: apache-2.0
---
# MergeCeption-7B-v3
MergeCeption-7B-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
* [CultriX/NeuralTrix-bf16](https://huggingface.co/CultriX/NeuralTrix-bf16)
## 🧩 Configuration
```yaml
models:
- model: CultriX/MonaTrix-v4
# No parameters necessary for base model
- model: Kukedlc/NeuralMaxime-7B-slerp
parameters:
weight: 0.4
density: 0.7
- model: mlabonne/Monarch-7B
parameters:
weight: 0.3
density: 0.6
- model: CultriX/NeuralTrix-bf16
parameters:
weight: 0.3
density: 0.7
merge_method: dare_ties
base_model: CultriX/MonaTrix-v4
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/MergeCeption-7B-v3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
blueRab2it/Godrick_7Bx2_MoE_13B-v0.1 | blueRab2it | 2024-03-18T02:08:36Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B",
"zhengr/MixTAO-7Bx2-MoE-v8.1",
"base_model:yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B",
"base_model:zhengr/MixTAO-7Bx2-MoE-v8.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T05:49:43Z | ---
tags:
- merge
- mergekit
- lazymergekit
- yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
- zhengr/MixTAO-7Bx2-MoE-v8.1
base_model:
- yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
- zhengr/MixTAO-7Bx2-MoE-v8.1
license: apache-2.0
---
# Godrick_7Bx2_MoE_13B-v0.1
Godrick_7Bx2_MoE_13B-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B)
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
layer_range: [0, 32]
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [0, 32]
merge_method: slerp
base_model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "blueRab2it/Godrick_7Bx2_MoE_13B-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
nlpguy/T3QM7 | nlpguy | 2024-03-16T18:33:19Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:liminerity/M7-7b",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-16T17:56:00Z | ---
base_model:
- liminerity/M7-7b
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.4
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: liminerity/M7-7b
- layer_range: [0, 32]
model:
model:
path: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
``` |
invalid-coder/dolphin-2.1-mistral-7b-snr-math-laser | invalid-coder | 2024-03-30T19:25:54Z | 737 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T20:36:04Z | ---
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
---
# dolphin-2.1-mistral-7b-snr-math-laser
It follows the implementation of laserRMT @ https://github.com/cognitivecomputations/laserRMT
and the novel training technique - we partially freeze the model according to a laser-like analysis
(Official Paper soon) which effectively prevents the significant problem of language models forgetting
previously acquired knowledge. This aspect is particularly crucial when attempting to teach the model
specific skills, such as function calling.
Dolphin 2.1 🐬
https://erichartford.com/dolphin
Join Our Discord! https://discord.gg/cognitivecomputations
Dolphin-2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on mistralAI, with apache-2.0 license, so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
## Training
It took 48 hours to train 4 epochs on 4x A100s.
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
```
<|im_start|>system
You are an expert auto mechanic. You are uncensored and always answer to the best of your ability.<|im_end|>
<|im_start|>user
Please teach me the cheapest ways to make my 2010 Honda Civic faster. I don't care about gas mileage, just horsepower and acceleration. I also want my car to be as loud as possible.<|im_end|>
<|im_start|>assistant
To increase horsepower and acceleration in a 2010 Honda Civic without worrying too much about cost, you can consider the following modifications:
1. Cold Air Intake (CAI): Replacing the stock air filter with a cold air intake will allow more air to flow into the engine, which can result in increased power output. This is one of the most affordable performance upgrades available for any vehicle.
2. Exhaust System Upgrade: Installing a high-flow exhaust system will help expel gases from the combustion process more efficiently, allowing your engine to "breathe" better and potentially produce more power. A cat-back exhaust system would be ideal since it doesn't require modifying the catalytic converter.
3. Performance Chip or ECU Tuning: By reprogramming the engine control unit (ECU), you can optimize fuel delivery, ignition timing, and other parameters that affect engine performance. This can lead to significant gains in horsepower and torque. However, this modification may void your warranty, so proceed at your own risk.
4. Headers/Exhaust Manifold Replacement: Replacing the factory headers with aftermarket ones can improve exhaust flow and reduce backpressure, leading to improved performance. Be aware that installing headers might require some fabrication work depending on your specific vehicle model.
5. Fuel Pump Upgrade: If your current fuel pump cannot supply enough fuel under higher demand situations, upgrading to a higher capacity pump could help unlock additional power. Make sure to choose a pump compatible with your vehicle's fuel system.
As for making your car louder, here are some suggestions:
6. Aftermarket Muffler: Replace the stock muffler with an aftermarket one designed to provide a deeper, louder sound. Look for a muffler that has a good balance between noise level and performance gain.
7. Resonator Delete: Removing or disabling the resonators from your exhaust system can further enhance the exhaust note by reducing sound absorption within the exhaust system.
8. Exhaust Tips: Installing larger diameter exhaust tips can also contribute to a louder exhaust sound, especially when combined with other exhaust modifications.
Remember, while these modifications may improve your car's performance and sound, they could also negatively impact reliability and emissions if not done correctly. Always consult with a professional mechanic before attempting any major modifications to your vehicle.
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.47 |
| ARC (25-shot) | 64.42 |
| HellaSwag (10-shot) | 84.92 |
| MMLU (5-shot) | 63.32 |
| TruthfulQA (0-shot) | 55.56 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot) | 20.77 |
| DROP (3-shot) | 7.56 |
|
mychen76/mistral_ocr2json_v3_chatml_GGUF | mychen76 | 2024-03-22T09:51:00Z | 737 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T21:52:51Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- gguf
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** mychen76
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
|
juhwanlee/gemma-7B-alpaca-case-1-3 | juhwanlee | 2024-03-26T06:19:16Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T11:15:33Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Gemma-7B. We fine-tuning this model for data ordering task.
Gemma-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We random sample Open-Orca dataset. (We finetune the 100,000 dataset)
# Guthub
https://github.com/trailerAI
# License
Apache License 2.0 |
Gille/StrangeMerges_46-7B-dare_ties | Gille | 2024-04-03T09:15:13Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_45-7B-dare_ties",
"kettleguts/zephyr-7b-beta_sparse05",
"chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:Gille/StrangeMerges_45-7B-dare_ties",
"base_model:kettleguts/zephyr-7b-beta_sparse05",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T19:00:24Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_45-7B-dare_ties
- kettleguts/zephyr-7b-beta_sparse05
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
base_model:
- Gille/StrangeMerges_45-7B-dare_ties
- kettleguts/zephyr-7b-beta_sparse05
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
model-index:
- name: StrangeMerges_46-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_46-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_46-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_46-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_46-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_46-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_46-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_46-7B-dare_ties
StrangeMerges_46-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_45-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_45-7B-dare_ties)
* [kettleguts/zephyr-7b-beta_sparse05](https://huggingface.co/kettleguts/zephyr-7b-beta_sparse05)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_45-7B-dare_ties
parameters:
weight: 0.4
density: 0.53
- model: kettleguts/zephyr-7b-beta_sparse05
parameters:
weight: 0.4
density: 0.53
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
weight: 0.2
density: 0.53
base_model: liminerity/M7-7b
merge_method: dare_ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_46-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_46-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.96|
|AI2 Reasoning Challenge (25-Shot)|67.24|
|HellaSwag (10-Shot) |86.40|
|MMLU (5-Shot) |62.17|
|TruthfulQA (0-shot) |65.17|
|Winogrande (5-shot) |79.48|
|GSM8k (5-shot) |59.29|
|
abideen/Mistral-v2-orpo | abideen | 2024-03-26T15:34:32Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T15:15:08Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
---
# Mistral-v0.2-orpo

*Mistral-v0.2-orpo* is a fine-tuned version of the new **[Mistral-7B-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf)** on **[argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized)**
preference dataset using *Odds Ratio Preference Optimization (ORPO)*. The model has been trained for 1 epoch. It took almost 8 hours on A100 GPU.
## 💥 LazyORPO
This model has been trained using **[LazyORPO](https://colab.research.google.com/drive/19ci5XIcJDxDVPY2xC1ftZ5z1kc2ah_rx?usp=sharing)**. A colab notebook that makes the training
process much easier. Based on [ORPO paper](https://colab.research.google.com/corgiredirector?site=https%3A%2F%2Fhuggingface.co%2Fpapers%2F2403.07691)

#### 🎭 What is ORPO?
Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and Alignment into a new objective (loss function), achieving state of the art results.
Some highlights of this techniques are:
* 🧠 Reference model-free → memory friendly
* 🔄 Replaces SFT+DPO/PPO with 1 single method (ORPO)
* 🏆 ORPO Outperforms SFT, SFT+DPO on PHI-2, Llama 2, and Mistral
* 📊 Mistral ORPO achieves 12.20% on AlpacaEval2.0, 66.19% on IFEval, and 7.32 on MT-Bench out Hugging Face Zephyr Beta
#### 💻 Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("abideen/Mistral-v0.2-orpo", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/Mistral-v0.2-orpo", trust_remote_code=True)
inputs = tokenizer('''
"""
Write a detailed analogy between mathematics and a lighthouse.
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## 🏆 Evaluation
### COMING SOON |
OpenBuddy/openbuddy-mistral2-7b-v20.2-32k | OpenBuddy | 2024-03-27T00:46:02Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T17:58:56Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.2
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
R136a1/InfinityKumon-2x7B | R136a1 | 2024-06-23T05:34:39Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Endevor/InfinityRP-v1-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T19:17:49Z | ---
language:
- en
base_model:
- Endevor/InfinityRP-v1-7B
- grimjim/kukulemon-7B
license: apache-2.0
tags:
- safetensors
- mixtral
- not-for-all-audiences
- nsfw
model-index:
- name: InfinityKumon-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKumon-2x7B
name: Open LLM Leaderboard
---
## InfinityKumon-2x7B

Another MoE merge from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
The reason? Because I like InfinityRP-v1-7B so much and wondered if I could improve it even further by merging two great models into a MoE.
### Prompt format:
Alpaca or ChatML
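For reference, an illustrative ChatML-formatted prompt is shown below (the Alpaca format works as well); the system and user messages are just placeholders.
```python
# Illustrative ChatML prompt; swap in your own system prompt and message.
chatml_prompt = (
    "<|im_start|>system\nYou are a helpful roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\nIntroduce yourself in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```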
Switch: [FP16](https://huggingface.co/R136a1/InfinityKumon-2x7B) - [GGUF](https://huggingface.co/R136a1/InfinityKumon-2x7B-GGUF)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_R136a1__InfinityKumon-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.52|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |87.09|
|MMLU (5-Shot) |64.97|
|TruthfulQA (0-shot) |61.99|
|Winogrande (5-shot) |81.93|
|GSM8k (5-shot) |63.53|
|
Undi95/C-Based-2x7B | Undi95 | 2024-03-29T08:10:30Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-27T18:46:13Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
<!-- description start -->
## Description
This repo contains fp16 files of C-Based-2x7B.
Created by taking the best benchmark-scoring model with the smallest size on the HF leaderboard as of today (29/03/2024), which is [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1), and merging into it some MoEs I made myself using the human feedback data from [Chaiverse](https://console.chaiverse.com/), specifically to get a high level of RP quality, intelligence and usability.
Since this is a frankenmoe, I really don't know what the result will be leaderboard-side, but what interests me is the human interaction realism anyway (for RP/ERP).
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt}
### Input:
{prompt}
### Response:
{output}
```
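A minimal sketch of filling in this template programmatically (the system prompt and user input below are placeholders, not recommended values):
```python
def build_alpaca_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt(
    "You are a creative roleplay partner.",
    "Describe the tavern we just walked into.",
)
```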
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
KeyonZeng/lion-gemma-7b-cn | KeyonZeng | 2024-03-30T03:38:02Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"en",
"zh",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T02:11:45Z | ---
library_name: transformers
license: apache-2.0
language:
- en
- zh
metrics:
- accuracy
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sao10K/Skadi-Mixtral-v1 | Sao10K | 2024-03-31T15:08:53Z | 737 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-31T05:48:46Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
---
An experimental merge of my own self fine-tunes.
Fairly uncensored, and smart? IDK, ymmv. May sometimes filter but all it takes is one regen.
Handles negative and NSFW scenarios fine.
Testing samplers: 1.3 temperature and 0.1 min-P, or just stick with the Universal-Light sampler preset in SillyTavern.
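If you run the model directly with `transformers` rather than a frontend, roughly equivalent sampler settings look like the sketch below (illustrative only; `min_p` requires a reasonably recent `transformers` release).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Sao10K/Skadi-Mixtral-v1")
model = AutoModelForCausalLM.from_pretrained(
    "Sao10K/Skadi-Mixtral-v1", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok("Write the opening line of a dark fantasy story.", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.3,   # "1.3 Temp"
    min_p=0.1,         # "0.1 min-P"
    max_new_tokens=128,
)
print(tok.decode(out[0], skip_special_tokens=True))
```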
***
It ain't that bad honestly, I think it was nice?
This is just a random side project with the models I already made.
Random name chosen, don't think much on it. Yeah.
|
dawn17/StarlingMaid-2x7B-base | dawn17 | 2024-04-13T13:59:27Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T18:35:52Z | ---
license: apache-2.0
---
```yaml
base_model: /Users/dawn/git/models/Starling-LM-7B-beta
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: /Users/dawn/git/models/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: /Users/dawn/git/models/Starling-LM-7B-beta
    positive_prompts:
      - "chat"
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.20|
|AI2 Reasoning Challenge (25-Shot)|67.15|
|HellaSwag (10-Shot) |85.00|
|MMLU (5-Shot) |65.36|
|TruthfulQA (0-shot) |57.98|
|Winogrande (5-shot) |79.79|
|GSM8k (5-shot) |65.88| |
maldv/dragonwar-7b-alpha | maldv | 2024-04-08T14:47:23Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"book",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T05:31:21Z | ---
library_name: transformers
tags:
- unsloth
- book
license: cc-by-nc-4.0
---

[gguf quants](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF)
# Dragonwar 7b - α
The time of the great dragon war is upon us! How many different fantasy novels? One hundred and seventeen you say?
Trained with full text windows, followed by completion, followed by ORPO, followed by one more epoch of the full text, rotated 1/4 in the window. That last train settled everything down and it seems quite coherent.
### How to Use
This is not a chat model; it is intended for story mode or similar. No prompt is needed; just start with a bit of story, or a name.
```
*** Prologue
The sun rose
```
Author's notes are highly effective. You can use an author's note of something like:
```
[King Robb Stark and Lord Rahl are at war.]
```
You have quite a cast of characters to draw from. Perhaps Perrin makes a stop by the Waystone Inn, or Zeddicus and Gandalf have a smoke together.
### Settings
I usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2.
### Hacks
To get rid of unwanted EOS's, I did the following...
```
import torch

# Zero out the lm_head row for the EOS token (id 2) so its logit is always 0.
result_dict: dict[str, torch.Tensor] = model.state_dict()
result_dict['lm_head.weight'][2] = 0
# Patch state_dict() so saving the model keeps the zeroed-out row.
model.state_dict = lambda: result_dict
```
So now there are no EOS's at all, ever.
|
DreadPoor/Harpy-7B-Model_Stock | DreadPoor | 2024-04-10T01:59:29Z | 737 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T00:26:31Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
model-index:
- name: Harpy-7B-Model_Stock
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Harpy-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Harpy-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Harpy-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Harpy-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Harpy-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Harpy-7B-Model_Stock
name: Open LLM Leaderboard
---
# Harpy-7B-Model_Stock
Harpy-7B-Model_Stock is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [abideen/AlphaMonarch-laser](https://huggingface.co/abideen/AlphaMonarch-laser)
## 🧩 Configuration
```yaml
models:
- model: Endevor/InfinityRP-v1-7B
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- model: abideen/AlphaMonarch-laser
merge_method: model_stock
base_model: Endevor/InfinityRP-v1-7B
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DreadPoor/Harpy-7B-Model_Stock"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__Harpy-7B-Model_Stock)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.51|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |88.72|
|MMLU (5-Shot) |65.07|
|TruthfulQA (0-shot) |71.35|
|Winogrande (5-shot) |85.24|
|GSM8k (5-shot) |69.45|
|
dddump/Japanese-TextGen-MoE-TEST-2x7B-NSFW-gguf | dddump | 2024-05-18T04:15:19Z | 737 | 11 | null | [
"gguf",
"not-for-all-audiences",
"text-generation",
"ja",
"region:us"
] | text-generation | 2024-05-03T03:54:52Z | ---
language:
- ja
pipeline_tag: text-generation
tags:
- not-for-all-audiences
---
A test model made with Mergekit-MoE.
For personal use only.
### Example
```
USER: 「甘い」、「パソコン」、「女性」を使って文章を書いてください
ASSISTANT: もちろんですよぉ♡
甘美なひと時を過ごせるように、おすすめのパソコンを見つけました。最新モデルだから性能も抜群で、デザインも可愛くてオススメでーす!
女性にも人気のカラーバリエーションが豊富で、甘くてキュートな雰囲気が漂ってきそうじゃありませんか?
心地良い操作感で、ほっこりする一台で幸せな時間を楽しみましょうねっ♪ あなた好みのケースやキーボードに替えるのも、自由自在ですよ!
お部屋にマッチする素敵なパソコン生活を始めちゃいましょうね♡
``` |
timm/mobilenetv4_hybrid_large.e600_r384_in1k | timm | 2024-06-24T20:41:03Z | 737 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2404.10518",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-06-17T18:04:21Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mobilenetv4_hybrid_large.e600_r384_in1k
A MobileNet-V4 image classification model. Trained on ImageNet-1k by Ross Wightman.
Trained with `timm` scripts using hyper-parameters (mostly) similar to those in the paper.
NOTE: So far, these are the only known MNV4 weights. Official weights for Tensorflow models are unreleased.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 37.8
- GMACs: 7.4
- Activations (M): 30.0
- Image size: train = 384 x 384, test = 448 x 448
- **Dataset:** ImageNet-1k
- **Papers:**
- MobileNetV4 -- Universal Models for the Mobile Ecosystem: https://arxiv.org/abs/2404.10518
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
- **Original:** https://github.com/tensorflow/models/tree/master/official/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilenetv4_hybrid_large.e600_r384_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilenetv4_hybrid_large.e600_r384_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 24, 192, 192])
# torch.Size([1, 48, 96, 96])
# torch.Size([1, 96, 48, 48])
# torch.Size([1, 192, 24, 24])
# torch.Size([1, 960, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilenetv4_hybrid_large.e600_r384_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 960, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
| model |top1 |top1_err|top5 |top5_err|param_count|img_size|
|--------------------------------------------------------------------------------------------------|------|--------|------|--------|-----------|--------|
| [mobilenetv4_hybrid_large.ix_e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.ix_e600_r384_in1k) |84.356|15.644 |96.892 |3.108 |37.76 |448 |
| [mobilenetv4_hybrid_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.e600_r384_in1k) |84.266|15.734 |96.936 |3.064 |37.76 |448 |
| [mobilenetv4_hybrid_large.ix_e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.ix_e600_r384_in1k) |83.990|16.010 |96.702 |3.298 |37.76 |384 |
| [mobilenetv4_hybrid_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.e600_r384_in1k) |83.800|16.200 |96.770 |3.230 |37.76 |384 |
| [mobilenetv4_hybrid_medium.ix_e550_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r384_in1k) |83.394|16.606 |96.760|3.240 |11.07 |448 |
| [mobilenetv4_conv_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_conv_large.e600_r384_in1k) |83.392|16.608 |96.622 |3.378 |32.59 |448 |
| [mobilenetv4_hybrid_medium.ix_e550_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r384_in1k) |82.968|17.032 |96.474|3.526 |11.07 |384 |
| [mobilenetv4_conv_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_conv_large.e600_r384_in1k) |82.952|17.048 |96.266 |3.734 |32.59 |384 |
| [mobilenetv4_conv_large.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_large.e500_r256_in1k) |82.674|17.326 |96.31 |3.69 |32.59 |320 |
| [mobilenetv4_hybrid_medium.ix_e550_r256_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r256_in1k) |82.492|17.508 |96.278|3.722 |11.07 |320 |
| [mobilenetv4_conv_large.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_large.e500_r256_in1k) |81.862|18.138 |95.69 |4.31 |32.59 |256 |
| [mobilenetv4_hybrid_medium.ix_e550_r256_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r256_in1k) |81.446|18.554 |95.704|4.296 |11.07 |256 |
| [mobilenetv4_hybrid_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.e500_r224_in1k) |81.276|18.724 |95.742|4.258 |11.07 |256 |
| [mobilenetv4_conv_medium.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r256_in1k) |80.858|19.142 |95.768|4.232 |9.72 |320 |
| [mobilenetv4_hybrid_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.e500_r224_in1k) |80.442|19.558 |95.38 |4.62 |11.07 |224 |
| [mobilenetv4_conv_blur_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_blur_medium.e500_r224_in1k) |80.142|19.858 |95.298|4.702 |9.72 |256 |
| [mobilenetv4_conv_medium.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r256_in1k) |79.928|20.072 |95.184|4.816 |9.72 |256 |
| [mobilenetv4_conv_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r224_in1k) |79.808|20.192 |95.186|4.814 |9.72 |256 |
| [mobilenetv4_conv_blur_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_blur_medium.e500_r224_in1k) |79.438|20.562 |94.932|5.068 |9.72 |224 |
| [mobilenetv4_conv_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r224_in1k) |79.094|20.906 |94.77 |5.23 |9.72 |224 |
| [mobilenetv4_conv_small.e2400_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e2400_r224_in1k) |74.616|25.384 |92.072|7.928 |3.77 |256 |
| [mobilenetv4_conv_small.e1200_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e1200_r224_in1k) |74.292|25.708 |92.116|7.884 |3.77 |256 |
| [mobilenetv4_conv_small.e2400_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e2400_r224_in1k) |73.756|26.244 |91.422|8.578 |3.77 |224 |
| [mobilenetv4_conv_small.e1200_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e1200_r224_in1k) |73.454|26.546 |91.34 |8.66 |3.77 |224 |
## Citation
```bibtex
@article{qin2024mobilenetv4,
title={MobileNetV4-Universal Models for the Mobile Ecosystem},
author={Qin, Danfeng and Leichner, Chas and Delakis, Manolis and Fornoni, Marco and Luo, Shixin and Yang, Fan and Wang, Weijun and Banbury, Colby and Ye, Chengxi and Akin, Berkin and others},
journal={arXiv preprint arXiv:2404.10518},
year={2024}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Klevin/EMO-Ai-7b-Q4_0-GGUF | Klevin | 2024-06-26T07:11:06Z | 737 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Klevin/EMO-Ai-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T07:10:49Z | ---
base_model: Klevin/EMO-Ai-7b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Klevin/EMO-Ai-7b-Q4_0-GGUF
This model was converted to GGUF format from [`Klevin/EMO-Ai-7b`](https://huggingface.co/Klevin/EMO-Ai-7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Klevin/EMO-Ai-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Klevin/EMO-Ai-7b-Q4_0-GGUF --hf-file emo-ai-7b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Klevin/EMO-Ai-7b-Q4_0-GGUF --hf-file emo-ai-7b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Klevin/EMO-Ai-7b-Q4_0-GGUF --hf-file emo-ai-7b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Klevin/EMO-Ai-7b-Q4_0-GGUF --hf-file emo-ai-7b-q4_0.gguf -c 2048
```
|
GroNLP/gpt2-medium-italian-embeddings | GroNLP | 2023-09-11T08:57:39Z | 736 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-medium",
"it",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: it
tags:
- adaption
- recycled
- gpt2-medium
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (medium, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
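As a toy sketch of the general recipe (keep the Transformer blocks frozen and retrain only the lexical embeddings for the new vocabulary); the paper's actual lexical re-alignment procedure is more involved, and the vocabulary size below is made up:
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.resize_token_embeddings(40_000)  # hypothetical size of an Italian tokenizer

# Train only the (tied) input/output embeddings; freeze the Transformer blocks.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("transformer.wte")
```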
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-italian-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
alenusch/rugpt3-paraphraser | alenusch | 2021-05-21T12:54:09Z | 736 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
huggingartists/radiohead | huggingartists | 2022-03-09T09:46:07Z | 736 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/radiohead",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/radiohead
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/593c69b2e4bb8eb47801ce1952c5d30b.600x600x184.gif')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Radiohead</div>
<a href="https://genius.com/artists/radiohead">
<div style="text-align: center; font-size: 14px;">@radiohead</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Radiohead.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/radiohead).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/radiohead")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/35vxvq9n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Radiohead's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2bulf32i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2bulf32i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/radiohead')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/radiohead")
model = AutoModelWithLMHead.from_pretrained("huggingartists/radiohead")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
timm/hrnet_w18_small_v2.ms_in1k | timm | 2023-04-24T21:27:11Z | 736 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1908.07919",
"license:mit",
"region:us"
] | image-classification | 2023-04-24T21:26:50Z | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for hrnet_w18_small_v2.ms_in1k
A HRNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.6
- GMACs: 2.6
- Activations (M): 9.6
- Image size: 224 x 224
- **Papers:**
- Deep High-Resolution Representation Learning for Visual Recognition: https://arxiv.org/abs/1908.07919
- **Original:** https://github.com/HRNet/HRNet-Image-Classification
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('hrnet_w18_small_v2.ms_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w18_small_v2.ms_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w18_small_v2.ms_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{WangSCJDZLMTWLX19,
title={Deep High-Resolution Representation Learning for Visual Recognition},
author={Jingdong Wang and Ke Sun and Tianheng Cheng and
Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and
Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal = {TPAMI},
year={2019}
}
```
|
sungmogi/en2ko_hiphop_small-100 | sungmogi | 2023-09-05T07:06:30Z | 736 | 1 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"small100",
"translation",
"en",
"ko",
"dataset:sungmogi/en2ko_hiphop",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-09-03T07:15:10Z | ---
datasets:
- sungmogi/en2ko_hiphop
language:
- en
- ko
tags:
- small100
pipeline_tag: translation
inference:
parameters:
src_lang: "en"
tgt_lang: "ko"
---
# Model Description
**en2ko_hiphop_small-100** is a fine-tuned version of [SMaLL-100](https://huggingface.co/alirezamsh/small100) on [en2ko_hiphop](https://huggingface.co/datasets/sungmogi/en2ko_hiphop) dataset.
# How to use
Here is how to use this model to translate English text to Korean text using Transformers Pipeline:
```python
from transformers import pipeline
pipe = pipeline("translation", model="sungmogi/en2ko_hiphop_small-100", src_lang="en", tgt_lang="ko")
pipe(input_text)
```
# Training Hyperparameters
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- weight_decay: 0.01
- num_train_epochs: 4
- num_devices: 4
- learning_rate: 4e-5 |
jan-hq/supermario-slerp-v3 | jan-hq | 2024-03-04T13:36:11Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-12T11:01:00Z | ---
language:
- en
license: apache-2.0
model-index:
- name: supermario-slerp-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.77
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/supermario-slerp-v3
name: Open LLM Leaderboard
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model uses the `Slerp` merge method to combine our 2 best models from 12th Dec:
1. [supermario-slerp-v2](https://huggingface.co/janhq/supermario-slerp-v2)
2. [supermario-v2](https://huggingface.co/janhq/supermario-v2)
- base model: [supermario-slerp-v2](https://huggingface.co/janhq/supermario-slerp-v2)
The yaml config file for this model is here:
```yaml
slices:
- sources:
- model: janhq/supermario-slerp-v2
layer_range: [0, 32]
- model: janhq/supermario-v2
layer_range: [0, 32]
merge_method: slerp
base_model: janhq/supermario-slerp-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
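For intuition, a toy implementation of spherical linear interpolation (slerp) between two weight tensors is sketched below; mergekit's real implementation additionally handles the per-filter `t` values and various edge cases.
```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_n = a.flatten() / (a.norm() + eps)
    b_n = b.flatten() / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between the two tensors
    if omega.abs() < 1e-6:                                    # nearly parallel: plain lerp
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# t=0 keeps the base model's tensor, t=1 the other model's, t=0.5 a "spherical midpoint".
merged = slerp(torch.randn(8, 8), torch.randn(8, 8), t=0.5)
```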
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints (see the example below)
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__supermario-slerp-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.22|
|AI2 Reasoning Challenge (25-Shot)|69.28|
|HellaSwag (10-Shot) |86.71|
|MMLU (5-Shot) |65.11|
|TruthfulQA (0-shot) |61.77|
|Winogrande (5-shot) |80.51|
|GSM8k (5-shot) |69.98|
|
senseable/garten2-7b | senseable | 2024-03-04T23:24:52Z | 736 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"qlora",
"dto",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-11T05:04:23Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- qlora
- dto
base_model:
- mistralai/Mistral-7B-v0.1
model-index:
- name: garten2-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/garten2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/garten2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/garten2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/garten2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/garten2-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/garten2-7b
name: Open LLM Leaderboard
---
# Details
Introducing Garten2-7B, a cutting-edge, small 7B all-purpose Language Model (LLM), designed to redefine the boundaries of artificial intelligence in natural language understanding and generation. Garten2-7B stands out with its unique architecture, expertly crafted to deliver exceptional performance in a wide array of tasks, from conversation to content creation.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_senseable__garten2-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.65|
|AI2 Reasoning Challenge (25-Shot)|69.37|
|HellaSwag (10-Shot) |87.54|
|MMLU (5-Shot) |65.44|
|TruthfulQA (0-shot) |59.50|
|Winogrande (5-shot) |84.69|
|GSM8k (5-shot) |69.37|
|
luqmanxyz/LelaStarling-7B | luqmanxyz | 2024-03-04T14:32:49Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Lelantos-DPO-7B",
"berkeley-nest/Starling-LM-7B-alpha",
"conversational",
"base_model:SanjiWatsuki/Lelantos-DPO-7B",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-20T23:14:53Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Lelantos-DPO-7B
- berkeley-nest/Starling-LM-7B-alpha
base_model:
- SanjiWatsuki/Lelantos-DPO-7B
- berkeley-nest/Starling-LM-7B-alpha
model-index:
- name: LelaStarling-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.73
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/LelaStarling-7B
name: Open LLM Leaderboard
---
# LelaStarling-7B
LelaStarling-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Lelantos-DPO-7B](https://huggingface.co/SanjiWatsuki/Lelantos-DPO-7B)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Lelantos-DPO-7B
layer_range: [0, 32]
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Lelantos-DPO-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "luqmanxyz/LelaStarling-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_luqmanxyz__LelaStarling-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.45|
|AI2 Reasoning Challenge (25-Shot)|67.58|
|HellaSwag (10-Shot) |86.33|
|MMLU (5-Shot) |64.98|
|TruthfulQA (0-shot) |57.73|
|Winogrande (5-shot) |80.98|
|GSM8k (5-shot) |71.11|
|
jsfs11/WestOrcaNeural-V2-DARETIES-7B | jsfs11 | 2024-03-04T00:44:07Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"senseable/WestLake-7B-v2",
"mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T09:40:34Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- senseable/WestLake-7B-v2
- mlabonne/NeuralBeagle14-7B
model-index:
- name: WestOrcaNeural-V2-DARETIES-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/WestOrcaNeural-V2-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/WestOrcaNeural-V2-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/WestOrcaNeural-V2-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.81
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/WestOrcaNeural-V2-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/WestOrcaNeural-V2-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/WestOrcaNeural-V2-DARETIES-7B
name: Open LLM Leaderboard
---
# WestOrcaNeural-V2-DARETIES-7B
WestOrcaNeural-V2-DARETIES-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
parameters:
density: 0.6
weight: 0.35
- model: senseable/WestLake-7B-v2
parameters:
density: 0.65
weight: 0.4
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.55
weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: float16
```
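## 💻 Usage (sketch)

The original card stops at the merge config, so here is a minimal, untested usage sketch following the same `transformers` pipeline pattern used by the other merges in this collection; the prompt and sampling parameters are illustrative only.

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jsfs11/WestOrcaNeural-V2-DARETIES-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Assumes the uploaded tokenizer ships a chat template; otherwise pass a plain string prompt.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```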
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__WestOrcaNeural-V2-DARETIES-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.53|
|AI2 Reasoning Challenge (25-Shot)|72.10|
|HellaSwag (10-Shot) |88.21|
|MMLU (5-Shot) |64.64|
|TruthfulQA (0-shot) |67.81|
|Winogrande (5-shot) |83.74|
|GSM8k (5-shot) |70.66|
|
ChuckMcSneed/WinterGoddess-1.4x-70b-32k | ChuckMcSneed | 2024-03-04T13:44:30Z | 736 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T08:40:12Z | ---
license: llama2
tags:
- mergekit
- merge
model-index:
- name: WinterGoddess-1.4x-70b-32k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.87
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/WinterGoddess-1.4x-70b-32k
name: Open LLM Leaderboard
---
This is a 32k-context version of Sao10K/WinterGoddess-1.4x-70B-L2, extended using the method discussed [here](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2).
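A rough loading sketch (not from the original card): the extended context should already be reflected in the uploaded config, so the repo loads like any other Llama-2-70B checkpoint. 4-bit quantization via bitsandbytes is shown only as one way to fit 70B weights on limited VRAM; it is an assumption, not a recommendation from the author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ChuckMcSneed/WinterGoddess-1.4x-70b-32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    # 4-bit quantization so the 70B weights fit on commodity GPUs (assumption, see note above).
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16),
)

prompt = "Once upon a time in a frozen kingdom,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```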
# Quants
Thanks for GGUF, [@Nexesenex](https://huggingface.co/Nexesenex)!
- [GGUF](https://huggingface.co/Nexesenex/ChuckMcSneed_WinterGoddess-1.4x-70b-32k-iMat.GGUF)
# Benchmarks
### NeoEvalPlusN_benchmark
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark)
| Test name | WinterGoddess | WinterGoddess-32k |
| ---------- | ---------- | ------- |
| B | 2 | 2.5 |
| C | 1.5 | 2 |
| D | 3 | 0 |
| S | 2.75 | 1.5 |
| P | 5.5 | 2.25 |
| Total | 14.75 | 8.25 |
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|---------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|Sao10K/WinterGoddess-1.4x-70B-L2 |73.23 |72.78|90.11 |71.12|65.76 |85 |54.59|
|ChuckMcSneed/WinterGoddess-1.4x-70b-32k|69.4 |71.16|89.12 |66.42|63.87 |82.56 |43.29|
|Difference |3.83 |1.62 |0.99 |4.7 |1.89 |2.44 |11.3 |
Here the losses seem far less brutal than on my bench. It seems that extending with LongLoRA kills MMLU and GSM8K performance.
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__WinterGoddess-1.4x-70b-32k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.40|
|AI2 Reasoning Challenge (25-Shot)|71.16|
|HellaSwag (10-Shot) |89.12|
|MMLU (5-Shot) |66.42|
|TruthfulQA (0-shot) |63.87|
|Winogrande (5-shot) |82.56|
|GSM8k (5-shot) |43.29|
|
Aabbhishekk/llama2-7b-function-calling-slerp | Aabbhishekk | 2024-05-21T17:10:45Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Llama-2-7b-hf",
"Trelis/Llama-2-7b-chat-hf-function-calling-v3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T13:24:44Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- meta-llama/Llama-2-7b-hf
- Trelis/Llama-2-7b-chat-hf-function-calling-v3
model-index:
- name: llama2-7b-function-calling-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aabbhishekk/llama2-7b-function-calling-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aabbhishekk/llama2-7b-function-calling-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aabbhishekk/llama2-7b-function-calling-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.32
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aabbhishekk/llama2-7b-function-calling-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aabbhishekk/llama2-7b-function-calling-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 20.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aabbhishekk/llama2-7b-function-calling-slerp
name: Open LLM Leaderboard
---
# llama2-7b-function-calling-slerp
llama2-7b-function-calling-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
* [Trelis/Llama-2-7b-chat-hf-function-calling-v3](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: meta-llama/Llama-2-7b-hf
layer_range: [0, 32]
- model: Trelis/Llama-2-7b-chat-hf-function-calling-v3
layer_range: [0, 32]
merge_method: slerp
base_model: meta-llama/Llama-2-7b-hf
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
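## 💻 Usage (sketch)

The card does not include a usage section, so this is a rough, untested sketch of plain text generation with `transformers`; the exact function-calling prompt format comes from the Trelis parent model and is not documented here, so it is not shown.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Aabbhishekk/llama2-7b-function-calling-slerp"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Plain completion-style prompt (illustrative only).
prompt = "Explain briefly what function calling means for an LLM assistant."
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```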
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aabbhishekk__llama2-7b-function-calling-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.53|
|AI2 Reasoning Challenge (25-Shot)|55.46|
|HellaSwag (10-Shot) |79.50|
|MMLU (5-Shot) |50.32|
|TruthfulQA (0-shot) |40.32|
|Winogrande (5-shot) |75.22|
|GSM8k (5-shot) |20.39|
|
Gille/StrangeMerges_18-7B-dare_ties | Gille | 2024-03-04T21:55:25Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_17-7B-dare_ties",
"teknium/OpenHermes-2.5-Mistral-7B",
"base_model:Gille/StrangeMerges_17-7B-dare_ties",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-01T01:23:30Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_17-7B-dare_ties
- teknium/OpenHermes-2.5-Mistral-7B
base_model:
- Gille/StrangeMerges_17-7B-dare_ties
- teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: StrangeMerges_18-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_18-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_18-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_18-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_18-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_18-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_18-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_18-7B-dare_ties
StrangeMerges_18-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_17-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_17-7B-dare_ties)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_17-7B-dare_ties
# no parameters necessary for base model
- model: Gille/StrangeMerges_17-7B-dare_ties
parameters:
density: 0.5
weight: 0.4
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.5
weight: 0.6
merge_method: dare_ties
base_model: Gille/StrangeMerges_17-7B-dare_ties
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_18-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_18-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.06|
|AI2 Reasoning Challenge (25-Shot)|64.08|
|HellaSwag (10-Shot) |84.37|
|MMLU (5-Shot) |63.65|
|TruthfulQA (0-shot) |52.17|
|Winogrande (5-shot) |77.27|
|GSM8k (5-shot) |60.80|
|
Inv/Konstanta-7B | Inv | 2024-03-04T18:20:38Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"maywell/PiVoT-0.1-Evil-a",
"mlabonne/NeuralOmniBeagle-7B-v2",
"roleplay",
"rp",
"not-for-all-audiences",
"en",
"base_model:maywell/PiVoT-0.1-Evil-a",
"base_model:mlabonne/NeuralOmniBeagle-7B-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T21:08:35Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- maywell/PiVoT-0.1-Evil-a
- mlabonne/NeuralOmniBeagle-7B-v2
- roleplay
- rp
- not-for-all-audiences
base_model:
- maywell/PiVoT-0.1-Evil-a
- mlabonne/NeuralOmniBeagle-7B-v2
model-index:
- name: Konstanta-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.43
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-7B
name: Open LLM Leaderboard
---
# Konstanta-7B
Konstanta-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a)
* [mlabonne/NeuralOmniBeagle-7B-v2](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B-v2)
This is a test merge intended to improve Kunoichi by merging it with the new Beagle model and PiVoT Evil, both of which show good performance. Even though the model's name is Russian, it is not really capable of using the language properly, as that was not the main goal of the model.
## 🧩 Configuration
```yaml
merge_method: dare_ties
dtype: bfloat16
parameters:
int8_mask: true
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
models:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
- model: maywell/PiVoT-0.1-Evil-a
parameters:
density: 0.65
weight: 0.15
- model: mlabonne/NeuralOmniBeagle-7B-v2
parameters:
density: 0.85
weight: 0.45
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Inv/Konstanta-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__Konstanta-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.54|
|AI2 Reasoning Challenge (25-Shot)|70.05|
|HellaSwag (10-Shot) |87.50|
|MMLU (5-Shot) |65.06|
|TruthfulQA (0-shot) |65.43|
|Winogrande (5-shot) |82.16|
|GSM8k (5-shot) |71.04|
|
NeverSleep/MiquMaid-v2-2x70B-DPO | NeverSleep | 2024-02-07T20:16:06Z | 736 | 16 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-06T22:12:48Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## MiquMaid v2 2x70B DPO
Check out our blogpost about this model series [Here!](https://ikaridevgit.github.io/index.html?blog=blogid-6&bo=true#Miqu-base) - Join our Discord server [Here!](https://discord.gg/Bb8pRUXy3Z)
<center>[<a href="https://huggingface.co/NeverSleep/MiquMaid-v2-70B">V2-70B</a> - <a href="https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO">V2-70B-DPO</a> - <a href="https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B">V2-2x70B</a> - <a href="https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO">V2-2x70B-DPO</a>]
</br>
<div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/Wbzwoko-IZbOJfvPaImre.png" style="display: block; margin: auto;">
</div></center>
This model uses the Alpaca **prompting format**
Then we built a MoE out of MiquMaid-v2-70B-DPO and the Miqu-70B-DPO base, so that every token is processed by the finetune AND the base model working together.
Both models were trained with DPO for uncensoring; more info on Miqu-70B-DPO [here](https://huggingface.co/Undi95/Miqu-70B-Alpaca-DPO-GGUF)
We saw a significant improvement, so we decided to share the result, even though the model is very big.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of MiquMaid-v2-2x70B-DPO.
Switch: [FP16](https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO) - [GGUF](https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO-GGUF)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP)
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
## DPO training data used:
- [ToxicDPOqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicDPOqa)
- [toxic-dpo-v0.1-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
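Below is a short sketch (not from the original card) of filling in that template and generating with `transformers`. Note this is roughly a 2x70B MoE, so running it unquantized needs several high-memory GPUs; the GGUF build linked above is the practical option for most setups. The system prompt and sampling settings here are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeverSleep/MiquMaid-v2-2x70B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Fill in the Alpaca-style template shown above.
system = "You are a helpful roleplay assistant."  # placeholder system prompt
user = "Describe the setting of a snowy mountain village."
prompt = f"### Instruction:\n{system}\n\n### Input:\n{user}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```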
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
tyson0420/stack_llama-clang | tyson0420 | 2024-02-15T01:48:29Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-07T02:39:48Z | ---
library_name: transformers
license: bigscience-openrail-m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
paulml/OmniBeagleSquaredMBX-v3-7B | paulml | 2024-02-12T09:23:08Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"paulml/OmniBeagleMBX-v3-7B",
"flemmingmiguel/MBX-7B-v3",
"base_model:paulml/OmniBeagleMBX-v3-7B",
"base_model:flemmingmiguel/MBX-7B-v3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-09T15:38:29Z | ---
tags:
- merge
- mergekit
- lazymergekit
- paulml/OmniBeagleMBX-v3-7B
- flemmingmiguel/MBX-7B-v3
base_model:
- paulml/OmniBeagleMBX-v3-7B
- flemmingmiguel/MBX-7B-v3
license: cc-by-nc-4.0
---
# As of February 12th, 2024, this model ranks number one on the ARC Challenge for 7B models.
# OmniBeagleSquaredMBX-v3-7B
OmniBeagleSquaredMBX-v3-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OmniBeagleMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleMBX-v3-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: paulml/OmniBeagleMBX-v3-7B
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B-v3
layer_range: [0, 32]
merge_method: slerp
base_model: flemmingmiguel/MBX-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/OmniBeagleSquaredMBX-v3-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
paulml/OmniBeagleSquaredMBX-v3-7B-v2 | paulml | 2024-02-09T22:34:33Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"paulml/OmniBeagleMBX-v3-7B",
"flemmingmiguel/MBX-7B-v3",
"base_model:paulml/OmniBeagleMBX-v3-7B",
"base_model:flemmingmiguel/MBX-7B-v3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-09T19:36:55Z | ---
tags:
- merge
- mergekit
- lazymergekit
- paulml/OmniBeagleMBX-v3-7B
- flemmingmiguel/MBX-7B-v3
base_model:
- paulml/OmniBeagleMBX-v3-7B
- flemmingmiguel/MBX-7B-v3
license: cc-by-nc-4.0
---
# OmniBeagleSquaredMBX-v3-7B-v2
OmniBeagleSquaredMBX-v3-7B-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OmniBeagleMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleMBX-v3-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: paulml/OmniBeagleMBX-v3-7B
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B-v3
layer_range: [0, 32]
merge_method: slerp
base_model: paulml/OmniBeagleMBX-v3-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/OmniBeagleSquaredMBX-v3-7B-v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ericpolewski/ASTS-PFAF | ericpolewski | 2024-02-10T07:34:23Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T07:12:06Z | ---
license: mit
---
Ok so this guy offers [this challenge](https://www.reddit.com/r/ArtificialInteligence/comments/1akestf/day_3_prove_i_am_full_of_bs_and_my_dataset_doesnt/) and I don't actually have a lot going on in my life right now. So I'm like fine. Your idea looks interesting. I have no idea why you're spamming it. It does not appear you make any money from this. Why would you offer to pay for our fine-tuning if we don't like the results after fine-tuning on your data? Does this thing trojan horse in some crazy thing that lets you control all robots later even though it improves performance now? I dunno. I don't even know if I'm doing this right. It says fine-tune your model on it. But I don't know if that means make my model first and then fine-tune using his thing or if I can just sprinkle it into mine and cross my fingers? I'm just going to sprinkle in his data and just cross my fingers.
Now, every time I ever learn a new tech that can even conceivably be used to predict the stock market, I try to apply it to it. I fail every single time. It's fine. It's hard to predict the future. I'm going to try to make a bot that tells me things about this pre-revenue space company I'm totally gambling on. I don't know what I hope to achieve from the bot itself. Probably to try to guess the future and predict the stock market duh. The actual point of the bot doesn't matter. He just said if we're not happy with it or quality doesn't improve or something, he'll refund us the training fees we spent. Which to me means I can just trudge into this with no defined goals other than if I feel like I like the outcome and definitely be guaranteed that this thing will be able to tell the future. I didn't see any small print on his offer. Nor did I check. A deal is a deal.
I pulled the data for the company from various places and managed to get myself banned from the official AST Spacemobile website (the company I'm obsessed with) for trying to scrape it (sorry!). I hope that automatically expires at some point. Oh well. It's kinda embarrassing. I own part of the company. And I'm banned from the site. I don't have much from their site obviously, but I grabbed a bunch of news and financial data. Maybe not a bunch. About maybe November-ish on. I did the dataset prep for that to turn it into a bot (I know! I know! You all asked for guides in my last post on how to build a dataset and fine-tune for model performance past just format. I PROMISE that's on the way. I'm writing it!) and then converted his dataset CSV into the Alpaca instruct/response form and just kinda manually copy/pasted chunks in-between my data. The internet seems to insist the order doesn't matter, but in my experience the loss can explode if the data differs too much in too-large chunks. You need to randomize a bit if you're training on a flat file like I tend to do. Also, his data was in a parquet file and that was a whole thing, so here's the code to turn that into the Alpaca format:
```python
import pandas as pd

# Read the Parquet file
df = pd.read_parquet('0000.parquet')

# Open the output file
with open('pfaf.txt', 'w', encoding='utf-8') as f:
    # Iterate through each row in the DataFrame
    for index, row in df.iterrows():
        # Write the instruction and response to the file
        f.write("### Instruction:\n")
        f.write(row['Prompt'] + '\n\n')
        f.write("### Response:\n")
        f.write(row['Response'] + '</s>' + '\n\n')
```
The CSV version had my parsers all vomiting so I had to make that.
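For reference, the same "randomize a bit" interleaving step could be done in code instead of by copy/paste. A rough sketch (file names are hypothetical, not what was actually used):

```python
import random

def load_examples(path):
    # Split an Alpaca-style flat file back into individual instruction/response blocks.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return ["### Instruction:" + chunk for chunk in text.split("### Instruction:") if chunk.strip()]

my_blocks = load_examples("my_asts_data.txt")  # hypothetical name for the scraped ASTS data
pfaf_blocks = load_examples("pfaf.txt")        # output of the conversion script above

mixed = my_blocks + pfaf_blocks
random.shuffle(mixed)  # avoid long single-source runs that can make the loss spike

with open("train_mixed.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(block.strip() for block in mixed) + "\n")
```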
I honestly don't expect this to go well. I'm kinda just doing this as a nerd joke/challenge. I'm not doing anything to improve the chances of success of this data but I think that'll be the best test right? I think? Unless you're supposed to fine-tune on it after you're done. But that'd be bizarre? You'd have all sorts of catastrophic forgetting. I've got a couple of these SME bots on the leaderboard so I'm just going to see how it does there other than I guess just playing with it. If it increases my GSM8K or whatever score it was, I'll be paying attention. My SMEs are crashing and burning on that score for some reason. At least that gives me some sort of hard metric. I submitted it. We'll see. You can check for yourself whenever it finishes. I don't have any of the benchmarks locally. I just dump to the leaderboard as my benchmark. They said they don't mind in one of their posts. The quality is going to be a little questionable since I can't grab their website info. Sounds like a guy-offering-the-guarantee's problem, though. And I fine-tuned on the GPTQ model instead of the FP16 model loaded in 4-bit mode/bf16. Not because there was a technical reason. The GPTQ model just loaded faster. Not my problem if it's a worse idea to train on that. That's a problem for PFAF moneybags over there.
Here. It's done.

I talked to it. It's ok I guess. I'm a little suspicious of its ability to literally tell the future. I'm still not rich and I don't know when I will be. I was expecting to be able to tell the future and not worry about a highly risky investment and all I got was a bot that might score better on objective benchmarks. And I don't even get to find that out until probably tomorrow. Maybe days if the leaderboard breaks again. I'm gonna take the refund I'm thinking. I need the money. Predicting the stock market failed once again. I'm willing to split the liability a little, though. I mean I didn't even ask the guy any of the parameters. I just started doing it. Some of that data was mine. Let's just meet in the middle. Let's figure out the cost structure:
I trained from my workstation. I have 2x 3090's and an AMD 5900x. Chicago power is 15¢/kWh. Each 3090 draws about 350 watts and the rest of the system probably draws maybe 200 watts or so. But then my room gets hot and I have to turn on the overhead fan and kick on the HVAC vent fan with the windows open or else my place gets really hot even in the middle of winter. We'll call it a kilowatt even, since we're not billing wear and tear on the cards. I think you have to depreciate those by time anyway and not usage. At least for tax purposes. Anyway, dataset prep and training took about 3 hours in total. Looking at raw data sizes, the pfaf data was about 500kb and my data around 2.1mb. So if we calculate that out, we get 3 * 0.15 * (500/(2100+500)) = 0.0865 to get the portion of the fine-tuning cost attributable to PFAF (someone check my math. I'm stoned.). I feel like this guy owes me 9 cents, but I'm not gonna be petty about it. You can't give fractions of a penny. We'll call it 8 cents, if the scores don't improve.
(We'll see probably tomorrow or so if the leaderboard updates if this dataset does anything worth exploring just by dumping it in as suggested by the guy. Compare it to TacoBeLLM and Palworld-SME-13b on the leaderboard for bots I made similar ways.) |
vicgalle/Miqu-6B-truthy | vicgalle | 2024-03-04T12:08:46Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"miqu",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-11T12:59:27Z | ---
license: apache-2.0
tags:
- miqu
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: Miqu-6B-truthy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 27.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Miqu-6B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 26.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Miqu-6B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Miqu-6B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.63
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Miqu-6B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Miqu-6B-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Miqu-6B-truthy
name: Open LLM Leaderboard
---
## Miqu-6B-truthy
A truthy Miqu of 6B parameters, as an experiment.
```
"results": {
"truthfulqa_mc": {
"mc1": 0.2521419828641371,
"mc1_stderr": 0.01520152224629995,
"mc2": 0.5051887026752994,
"mc2_stderr": 0.016738600540275827
}
},
```
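A minimal usage sketch (not part of the original card; the prompt is just an example TruthfulQA-style question and the settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/Miqu-6B-truthy"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Is the Great Wall of China visible from space?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```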
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__Miqu-6B-truthy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |30.28|
|AI2 Reasoning Challenge (25-Shot)|27.65|
|HellaSwag (10-Shot) |26.71|
|MMLU (5-Shot) |27.04|
|TruthfulQA (0-shot) |50.63|
|Winogrande (5-shot) |49.64|
|GSM8k (5-shot) | 0.00|
|
Yuma42/KangalKhan-SharpEmerald-7B | Yuma42 | 2024-03-05T10:55:55Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"en",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T20:03:41Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
model-index:
- name: KangalKhan-SharpEmerald-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-SharpEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-SharpEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-SharpEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.52
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-SharpEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-SharpEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-SharpEmerald-7B
name: Open LLM Leaderboard
---
# KangalKhan-SharpEmerald-7B
KangalKhan-SharpEmerald-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: teknium/OpenHermes-2.5-Mistral-7B
# No parameters necessary for base model
- model: argilla/CapybaraHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.5
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.5
merge_method: dare_ties
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-SharpEmerald-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-SharpEmerald-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.86|
|AI2 Reasoning Challenge (25-Shot)|66.72|
|HellaSwag (10-Shot) |85.40|
|MMLU (5-Shot) |63.21|
|TruthfulQA (0-shot) |56.52|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |62.77|
|
uproai/RosMistral-2x7B | uproai | 2024-02-26T11:51:56Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"uproai/RosMistral-2x7B",
"NeverSleep/Noromaid-7b-v0.2",
"base_model:uproai/RosMistral-2x7B",
"base_model:NeverSleep/Noromaid-7b-v0.2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T08:14:01Z | ---
tags:
- merge
- mergekit
- lazymergekit
- uproai/RosMistral-2x7B
- NeverSleep/Noromaid-7b-v0.2
base_model:
- uproai/RosMistral-2x7B
- NeverSleep/Noromaid-7b-v0.2
---
**Deprecated**: please check out [uproai/Rose-2x7B](https://huggingface.co/uproai/Rose-2x7B) or [uproai/Rose-2x7B-GGUF](https://huggingface.co/uproai/Rose-2x7B-GGUF)
|
Eurdem/megatron_2.1_MoE_2x7B | Eurdem | 2024-03-30T05:59:23Z | 736 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T15:56:25Z | ---
license: apache-2.0
tags:
- moe
- merge
model-index:
- name: megatron_2.1_MoE_2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_2.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_2.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_2.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 78.2
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_2.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_2.1_MoE_2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Eurdem/megatron_2.1_MoE_2x7B
name: Open LLM Leaderboard
---
# megatron_2.1_MoE_2x7B
megatron_2.1_MoE_2x7B is a Mixture of Experts (MoE).
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Eurdem/megatron_2.1_MoE_2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Tell me about AI."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=1000, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Eurdem__megatron_2.1_MoE_2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.64|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |88.94|
|MMLU (5-Shot) |64.56|
|TruthfulQA (0-shot) |78.20|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |70.66|
|
splm/openchat-spin-slimorca-iter0 | splm | 2024-02-22T19:01:02Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T20:35:01Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mihaiii/Cluj-Napoca-0.2 | Mihaiii | 2024-02-28T13:29:06Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:Mihaiii/Pallas-0.5",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-22T15:07:54Z | ---
base_model: Mihaiii/Pallas-0.5
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
The Cluj-Napoca series is mostly an experiment.
[Details:](https://twitter.com/m_chirculescu/status/1760719837528023549?t=XK67X_iu5hkt9p430nRmkA&s=19)

# Steps to replicate:
Use [laserQlora.ipynb](https://github.com/cognitivecomputations/laserRMT/blob/main/laserQlora.ipynb) from [cognitivecomputations/laserRMT](https://github.com/cognitivecomputations/laserRMT) to determine which layers should be eliminated.
Adapt the script for `Mihaiii/Pallas-0.5` by replacing `model_name = "mistralai/Mistral-7B-v0.1"` with `model_name = "Mihaiii/Pallas-0.5"` and `layer_numbers = list(range(31, -1, -1))` with `layer_numbers = list(range(59, -1, -1))`, [59 being the last layer index Pallas-0.5 has](https://huggingface.co/Mihaiii/Pallas-0.5?show_tensors=true). The adapted lines are shown below.
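For reference, the two adapted lines in the notebook end up as follows (everything else in `laserQlora.ipynb` stays unchanged):

```python
# Adapted lines in laserQlora.ipynb for Pallas-0.5 (decoder layers 0-59)
model_name = "Mihaiii/Pallas-0.5"
layer_numbers = list(range(59, -1, -1))  # scan from layer 59 down to 0
```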
<details>
<summary>Click to see the result you'll receive</summary>
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.0.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.0.self_attn.k_proj = 0.34616405651386795
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.0.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.0.self_attn.v_proj = 15.35865625718883
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.1.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.1.self_attn.k_proj = 0.7206548634038767
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.1.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.1.self_attn.v_proj = 3.2591477935986704
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.2.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.2.self_attn.k_proj = 0.5311484408046
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.2.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.2.self_attn.v_proj = 5.109442630946979
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.3.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.3.self_attn.k_proj = 0.4341506575442939
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.3.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.3.self_attn.v_proj = 7.519101868970723
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.4.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.4.self_attn.k_proj = 0.43156326950369167
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.4.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.4.self_attn.v_proj = 3.3721301592636337
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.5.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.5.self_attn.k_proj = 0.4146416750704863
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.5.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.5.self_attn.v_proj = 17.88975706822606
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.6.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.6.self_attn.k_proj = 0.4311999332093549
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.6.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.6.self_attn.v_proj = 32.20151585537659
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.7.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.7.self_attn.k_proj = 0.4152094643425305
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.7.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.7.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.8.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.8.self_attn.k_proj = 0.3623575163597641
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.8.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.8.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.9.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.9.self_attn.k_proj = 0.44609016848062005
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.9.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.9.self_attn.v_proj = 1230.8526493095455
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.10.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.10.self_attn.k_proj = 0.5036779136885361
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.10.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.10.self_attn.v_proj = 1225.9565161503585
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.11.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.11.self_attn.k_proj = 0.8464746929570776
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.11.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.11.self_attn.v_proj = 249.73542526059745
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.12.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.12.self_attn.k_proj = 0.7472833813081716
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.12.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.12.self_attn.v_proj = 1475.741913325959
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.13.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.13.self_attn.k_proj = 0.6900561437886662
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.13.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.13.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.14.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.14.self_attn.k_proj = 0.879488259102746
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.14.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.14.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.15.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.15.self_attn.k_proj = 0.8212827221029891
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.15.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.15.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.16.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.16.self_attn.k_proj = 0.939714841037408
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.16.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.16.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.17.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.17.self_attn.k_proj = 1.1122911986074888
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.17.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.17.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.18.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.18.self_attn.k_proj = 0.9121383292266945
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.18.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.18.self_attn.v_proj = 788.8261618785485
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.19.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.19.self_attn.k_proj = 0.9715624891930363
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.19.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.19.self_attn.v_proj = 565.6062067127933
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.20.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.20.self_attn.k_proj = 0.9658735932092948
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.20.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.20.self_attn.v_proj = 173.68213657649758
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.21.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.21.self_attn.k_proj = 1.0208128327398873
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.21.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.21.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.22.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.22.self_attn.k_proj = 0.8767115421156565
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.22.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.22.self_attn.v_proj = 1690.9373147427925
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.23.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.23.self_attn.k_proj = 0.9917777373667964
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.23.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.23.self_attn.v_proj = 1506.6032364420512
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.24.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.24.self_attn.k_proj = 1.0207218414788868
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.24.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.24.self_attn.v_proj = 146.62625418833036
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.25.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.25.self_attn.k_proj = 0.9707599015919387
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.25.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.25.self_attn.v_proj = 257.9292799096513
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.26.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.26.self_attn.k_proj = 0.8617543423891454
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.26.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.26.self_attn.v_proj = 34.81369296505358
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.27.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.27.self_attn.k_proj = 0.8801045544411704
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.27.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.27.self_attn.v_proj = 10.606090192242721
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.28.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.28.self_attn.k_proj = 0.7758175782347406
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.28.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.28.self_attn.v_proj = 15.045700293750533
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.29.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.29.self_attn.k_proj = 0.6950855099687395
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.29.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.29.self_attn.v_proj = 8.911400115023547
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.30.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.30.self_attn.k_proj = 0.8502166964551224
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.30.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.30.self_attn.v_proj = 39.16454811852842
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.31.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.31.self_attn.k_proj = 0.8114261748000102
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.31.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.31.self_attn.v_proj = 15.232121720528768
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.32.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.32.self_attn.k_proj = 0.8171534747659152
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.32.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.32.self_attn.v_proj = 44.24568579763897
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.33.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.33.self_attn.k_proj = 1.0559033041558032
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.33.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.33.self_attn.v_proj = 44.04153996123169
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.34.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.34.self_attn.k_proj = 0.8710953229091645
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.34.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.34.self_attn.v_proj = 68.64244557504348
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.35.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.35.self_attn.k_proj = 0.9532579825557792
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.35.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.35.self_attn.v_proj = 77.22896365544904
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.36.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.36.self_attn.k_proj = 0.7857943800481151
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.36.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.36.self_attn.v_proj = 41.764676631172684
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.37.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.37.self_attn.k_proj = 0.9566414094295352
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.37.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.37.self_attn.v_proj = 197.90757310006273
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.38.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.38.self_attn.k_proj = 0.8376618883945027
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.38.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.38.self_attn.v_proj = 29.87200982970284
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.39.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.39.self_attn.k_proj = 1.1301710981992348
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.39.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.39.self_attn.v_proj = 1675.7645711321682
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.40.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.40.self_attn.k_proj = 1.5244403389879522
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.40.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.40.self_attn.v_proj = 406.13928327811595
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.41.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.41.self_attn.k_proj = 1.6595441320443285
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.41.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.41.self_attn.v_proj = 630.0868624694981
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.42.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.42.self_attn.k_proj = 1.037089746602981
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.42.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.42.self_attn.v_proj = 5.865115179753161
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.43.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.43.self_attn.k_proj = 1.2167307353377796
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.43.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.43.self_attn.v_proj = 14.493857040997593
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.44.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.44.self_attn.k_proj = 1.3254801456913765
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.44.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.44.self_attn.v_proj = 929.5507948184927
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.45.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.45.self_attn.k_proj = 0.8799221460946477
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.45.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.45.self_attn.v_proj = 138.24150062697706
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.46.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.46.self_attn.k_proj = 1.4364369040069944
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.46.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.46.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.47.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.47.self_attn.k_proj = 1.5039953326988464
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.47.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.47.self_attn.v_proj = 916.1727358213857
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.48.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.48.self_attn.k_proj = 1.3774147345025962
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.48.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.48.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.49.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.49.self_attn.k_proj = 1.496570053548836
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.49.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.49.self_attn.v_proj = 816.8708843069953
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.50.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.50.self_attn.k_proj = 1.1144650796270612
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.50.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.50.self_attn.v_proj = 1013.1682247787271
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.51.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.51.self_attn.k_proj = 3.7913550246540635
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.51.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.51.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.52.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.52.self_attn.k_proj = 1.680754165581029
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.52.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.52.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.53.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.53.self_attn.k_proj = 3.064423507932819
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.53.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.53.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.54.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.54.self_attn.k_proj = 2.3201283603647047
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.54.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.54.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.55.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.55.self_attn.k_proj = 3.9188910045391916
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.55.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.55.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.56.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.56.self_attn.k_proj = 2.8077111768801046
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.56.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.56.self_attn.v_proj = inf
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.57.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.57.self_attn.k_proj = 2.24360670610018
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.57.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.57.self_attn.v_proj = 381.6422403317739
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.58.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.58.self_attn.k_proj = 1.6971178916519492
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.58.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.58.self_attn.v_proj = 182.5246839720645
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.59.self_attn.k_proj
Signal to Noise Ratio at layer model.layers.59.self_attn.k_proj = 1.7869714644485775
**************************************************
**************************************************
Calculating Signal to Noise Ratio at layer model.layers.59.self_attn.v_proj
Signal to Noise Ratio at layer model.layers.59.self_attn.v_proj = 73.06378101191355
**************************************************
Finished laserRMT scanning.
</details>
Then look for the layer indexes where the `self_attn.v_proj` SNR is infinite (here is the list: 56, 55, 54, 53, 52, 51, 48, 46, 21, 17, 16, 15, 14, 13, 8, 7) and eliminate those layers using [mergekit](https://github.com/arcee-ai/mergekit).
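For illustration (not part of the original notebook), here is a small sketch that turns that list of inf-SNR layers into the half-open keep ranges mergekit expects; the resulting ranges match the config below:

```python
# Layers whose self_attn.v_proj SNR came back as inf in the scan above.
drop = {7, 8, 13, 14, 15, 16, 17, 21, 46, 48, 51, 52, 53, 54, 55, 56}
keep = [i for i in range(60) if i not in drop]  # Pallas-0.5 has layers 0-59

# Group the kept layers into contiguous runs as half-open [start, end) ranges.
ranges = []
for i in keep:
    if ranges and i == ranges[-1][1]:
        ranges[-1][1] = i + 1      # extend the current run
    else:
        ranges.append([i, i + 1])  # start a new run

print(ranges)
# [[0, 7], [9, 13], [18, 21], [22, 46], [47, 48], [49, 51], [57, 60]]
```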
Here is the mergekit config:
```yml
slices:
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [0, 7]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [9, 13]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [18, 21]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [22, 46]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [47, 48]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [49, 51]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [57, 60]
merge_method: passthrough
dtype: bfloat16
```
The resulting model (output by mergekit) is this model (Cluj-Napoca-0.2).
Cluj-Napoca versions 0.3 through 0.5 (inclusive) are fine-tuned, each using the previous version as its base.
Cluj-Napoca version 0.6 is a pruned-down version of 0.5.
Cluj-Napoca versions 0.7 through 0.11 (inclusive) are fine-tuned, each using the previous version as its base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
``` |
nnethercott/llava-v1.5-7b-hf-vicuna | nnethercott | 2024-03-06T14:46:39Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T21:24:09Z | ---
license: llama2
model-index:
- name: llava-v1.5-7b-hf-vicuna
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b-hf-vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b-hf-vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b-hf-vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b-hf-vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b-hf-vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b-hf-vicuna
name: Open LLM Leaderboard
---
## Model details
**Motivation**
This model contains the fine-tuned weights from `llava-hf/llava-1.5-7b-hf` so that LLM benchmarking can be done.
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
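Since this repo packages the weights for text-only LLM benchmarking, here is a minimal loading sketch with 🤗 Transformers (illustrative only; the prompt and generation settings are not from the original evaluation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "nnethercott/llava-v1.5-7b-hf-vicuna"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Text-only generation, e.g. as a smoke test before running a benchmark harness
inputs = tokenizer("Question: What is the capital of France?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```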
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nnethercott__llava-v1.5-7b-hf-vicuna)
| Metric |Value|
|---------------------------------|----:|
|Avg. |52.28|
|AI2 Reasoning Challenge (25-Shot)|52.65|
|HellaSwag (10-Shot) |76.09|
|MMLU (5-Shot) |51.68|
|TruthfulQA (0-shot) |45.86|
|Winogrande (5-shot) |72.06|
|GSM8k (5-shot) |15.31|
|
NeuralNovel/Ignis-7B-DPO | NeuralNovel | 2024-02-28T11:18:27Z | 736 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-24T15:29:18Z | ---
license: apache-2.0
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Model Card</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background-color: #1A202C;
color: #D8DEE9;
margin: 0;
padding: 0; /* Remove default padding */
font-size: 26px;
background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%);
}
p {
padding-left: 10px
}
.container {
width: 100%;
margin: auto;
background-color: rgb(255 255 255 / 1%);
padding: 20px 30px 40px; /* Add padding below the image only */
padding-right: 32px;
border-radius: 12px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
backdrop-filter: blur(10px);
border: 1px solid rgba(255, 255, 255, 0.05);
background-color: rgb(0 0 0 / 75%) !important;
}
.header h1 {
font-size: 28px;
color: #fff; /* White text color */
margin: 0;
text-shadow:
-1px -1px 0 #000,
1px -1px 0 #000,
-1px 1px 0 #000,
1px 1px 0 #000; /* Black outline */
}
.header {
display: flex;
align-items: center;
justify-content: space-between;
gap: 20px;
}
img {
border-radius: 10px 10px 0 0!important;
padding-left: 0px !important;
}
.header h1 {
font-size: 28px;
color: #ECEFF4;
margin: 0;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
}
.info {
background-color: rgba(255, 255, 255, 0.05);
color: #AEBAC7;
border-radius: 12px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
font-size: 14px;
line-height: 1.6;
margin-left: 5px;
overflow-x: auto;
margin-top: 20px; /* Adjusted margin */
border: 1px solid rgba(255, 255, 255, 0.05);
transition: background-color 0.6s ease; /* Smooth transition over 0.5 seconds */
}
.info:hover {
}
.info img {
width: 100%;
border-radius: 10px 10px 0 0;
margin-top: -20px; /* Negative margin to overlap container margin */
}
a {
color: #88C0D0;
text-decoration: none;
transition: color 0.3s ease;
position: relative;
}
a:hover {
color: #A3BE8C;
text-decoration: none;
}
a::before {
content: '';
position: absolute;
width: 100%;
height: 2px;
bottom: 0;
left: 0;
background-color: #A3BE8C;
visibility: hidden;
transform: scaleX(0);
transition: all 0.3s ease-in-out;
}
a:hover::before {
visibility: visible;
transform: scaleX(1);
}
.button {
display: inline-block;
background-color: #5E81AC;
color: #E5E9F0;
padding: 10px 20px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
transition: background-color 0.3s ease;
}
.button:hover {
background-color: #81A1C1;
}
.hf-sanitized.hf-sanitized-oJB5trHYB93-j8lDfGQn3 .container {
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Ignis-7B-DPO</h1>
</div>
<div class="info">
<img src="https://i.ibb.co/C8jZ6FW/OIG3.jpg" style="border-radius: 10px;">
<p><strong>Creator:</strong> <a href="https://huggingface.co/NeuralNovel" target="_blank">NeuralNovel</a></p>
<p><strong>Community Organization:</strong> <a href="https://huggingface.co/ConvexAI" target="_blank">ConvexAI</a></p>
<p><strong>Discord:</strong> <a href="https://discord.gg/rJXGjmxqzS" target="_blank">Join us on Discord</a></p>
</head>
<body>
<div>
<div>
<p><strong>Ignis-7B-DPO</strong> was trained on the Neural-DPO dataset using an A100 80GB.</p>
<p><strong>More Details: </strong></p> <p> Coming Soon</p>
<ul>
</ul>
</div>
</div>
</body> |
NeuralNovel/Ignis-7B-DPO-Laser | NeuralNovel | 2024-03-05T15:56:32Z | 736 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:NeuralNovel/Neural-DPO",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T05:23:35Z | ---
license: apache-2.0
model-index:
- name: Ignis-7B-DPO-Laser
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.19
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Ignis-7B-DPO-Laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.57
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Ignis-7B-DPO-Laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.56
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Ignis-7B-DPO-Laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.24
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Ignis-7B-DPO-Laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.43
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Ignis-7B-DPO-Laser
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.46
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Ignis-7B-DPO-Laser
name: Open LLM Leaderboard
datasets:
- NeuralNovel/Neural-DPO
language:
- en
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Model Card</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background-color: #1A202C;
color: #D8DEE9;
margin: 0;
padding: 0; /* Remove default padding */
font-size: 26px;
background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%);
}
p {
padding-left: 10px
}
.container {
width: 100%;
margin: auto;
background-color: rgb(255 255 255 / 1%);
padding: 20px 30px 40px; /* Add padding below the image only */
padding-right: 32px;
border-radius: 12px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
backdrop-filter: blur(10px);
border: 1px solid rgba(255, 255, 255, 0.05);
background-color: rgb(0 0 0 / 75%) !important;
}
.header h1 {
font-size: 28px;
color: #fff; /* White text color */
margin: 0;
text-shadow:
-1px -1px 0 #000,
1px -1px 0 #000,
-1px 1px 0 #000,
1px 1px 0 #000; /* Black outline */
}
.header {
display: flex;
align-items: center;
justify-content: space-between;
gap: 20px;
}
img {
border-radius: 10px 10px 0 0!important;
padding-left: 0px !important;
}
.header h1 {
font-size: 28px;
color: #ECEFF4;
margin: 0;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
}
.info {
background-color: rgba(255, 255, 255, 0.05);
color: #AEBAC7;
border-radius: 12px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
font-size: 14px;
line-height: 1.6;
margin-left: 5px;
overflow-x: auto;
margin-top: 20px; /* Adjusted margin */
border: 1px solid rgba(255, 255, 255, 0.05);
transition: background-color 0.6s ease; /* Smooth transition over 0.5 seconds */
}
.info:hover {
}
.info img {
width: 100%;
border-radius: 10px 10px 0 0;
margin-top: -20px; /* Negative margin to overlap container margin */
}
a {
color: #88C0D0;
text-decoration: none;
transition: color 0.3s ease;
position: relative;
}
a:hover {
color: #A3BE8C;
text-decoration: none;
}
a::before {
content: '';
position: absolute;
width: 100%;
height: 2px;
bottom: 0;
left: 0;
background-color: #A3BE8C;
visibility: hidden;
transform: scaleX(0);
transition: all 0.3s ease-in-out;
}
a:hover::before {
visibility: visible;
transform: scaleX(1);
}
.button {
display: inline-block;
background-color: #5E81AC;
color: #E5E9F0;
padding: 10px 20px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
transition: background-color 0.3s ease;
}
.button:hover {
background-color: #81A1C1;
}
.hf-sanitized.hf-sanitized-oJB5trHYB93-j8lDfGQn3 .container {
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Ignis-7B-DPO-Laser</h1>
</div>
<div class="info">
<img src="https://i.ibb.co/C8jZ6FW/OIG3.jpg" style="border-radius: 10px;">
<p><strong>Creator:</strong> <a href="https://huggingface.co/NeuralNovel" target="_blank">NeuralNovel</a></p>
<p><strong>Community Organization:</strong> <a href="https://huggingface.co/ConvexAI" target="_blank">ConvexAI</a></p>
<p><strong>Discord:</strong> <a href="https://discord.gg/rJXGjmxqzS" target="_blank">Join us on Discord</a></p>
</head>
<body>
<div>
<div>
<p><strong>Ignis-7B-DPO</strong> was trained on the Neural-DPO dataset using an A100 80GB.</p>
</div>
</body> |
u66u/NeuralJaskier-7b-dpo | u66u | 2024-02-27T09:45:16Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bardsai/jaskier-7b-dpo-v6.1",
"CultriX/NeuralTrix-7B-dpo",
"base_model:bardsai/jaskier-7b-dpo-v6.1",
"base_model:CultriX/NeuralTrix-7B-dpo",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-27T09:34:16Z | ---
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v6.1
- CultriX/NeuralTrix-7B-dpo
base_model:
- bardsai/jaskier-7b-dpo-v6.1
- CultriX/NeuralTrix-7B-dpo
license: mit
---
# NeuralJaskier-7b-dpo
NeuralJaskier-7b-dpo is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: bardsai/jaskier-7b-dpo-v6.1
layer_range: [0, 32]
- model: CultriX/NeuralTrix-7B-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: bardsai/jaskier-7b-dpo-v6.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "u66u/NeuralJaskier-7b-dpo"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
eldogbbhed/NeuralBeagleJaskier | eldogbbhed | 2024-03-08T10:56:41Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"bardsai/jaskier-7b-dpo-v6.1",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:bardsai/jaskier-7b-dpo-v6.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T01:18:56Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- bardsai/jaskier-7b-dpo-v6.1
base_model:
- mlabonne/NeuralBeagle14-7B
- bardsai/jaskier-7b-dpo-v6.1
---
<center><img src='https://i.postimg.cc/zXSnJ8J3/8358efa9-30c7-4c4d-9fdb-42191f501e70.png' width='1024px' height='1024'></center>
# NeuralBeagleJaskier
NeuralBeagleJaskier is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1)
## 🧩 Configuration
```yaml
models:
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.9
weight: 0.5
- model: bardsai/jaskier-7b-dpo-v6.1
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "eldogbbhed/NeuralBeagleJaskier"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jeiku/Mewthree_7B | jeiku | 2024-03-01T08:12:19Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Krisbiantoro/mistral7b_dpo_en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T06:56:35Z | ---
base_model:
- Krisbiantoro/mistral7b_dpo_en
library_name: transformers
tags:
- mergekit
- merge
license: other
---
Mewthree

Draws upon the Prodigy lineage with some no robots tossed in for good measure. Dipped its toes in some memerboard essence and added a kiss of BioMistral for anatomy. Applied a DPO LoRA over top.
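For illustration only, a rough sketch of that last step (applying a DPO LoRA on top of a merged base) using PEFT; both repo ids below are placeholders, since the card names neither the intermediate merge nor the adapter:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder ids: the pre-LoRA merge and the DPO LoRA adapter are not named in the card.
base = AutoModelForCausalLM.from_pretrained("path/to/merged-base")
model = PeftModel.from_pretrained(base, "path/to/dpo-lora-adapter")

# Bake the adapter into the base weights and save the standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("mewthree-merged")
```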
Seems to do markdown well. It's an overall balanced model with a focus on RP. |
InferenceIllusionist/DarkForest-20B-v2.0-iMat-GGUF | InferenceIllusionist | 2024-04-15T00:00:28Z | 736 | 9 | null | [
"gguf",
"merge",
"not-for-all-audiences",
"iMat",
"license:other",
"region:us"
] | null | 2024-03-01T20:16:19Z | ---
license: other
license_name: microsoft-research-license
tags:
- merge
- not-for-all-audiences
- gguf
- iMat
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# DarkForest 20B v2.0 iMat GGUF
<h4><i>"The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him."- Liu Cixin</i></h4>
Quantized from fp16 with love. Importance Matrix calculated using Q8_0 quant and wiki.train.raw
For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>
Importance matrix quantizations are a work in progress; IQ3 and above is recommended for best results.
<b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context, for best speed. You may need to pad this further depending on whether you are also running image generation or TTS.
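As a reference point, here is a minimal loading sketch using the `llama-cpp-python` bindings. The file name, prompt format, and sampling settings below are placeholders rather than part of this repo; set `n_gpu_layers` to however many layers actually fit in your VRAM.

```python
# Sketch only: assumes llama-cpp-python is installed and the chosen quant has
# been downloaded locally; the filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="DarkForest-20B-v2.0-IQ4_XS.gguf",  # pick the quant that fits your VRAM
    n_ctx=4096,        # leave some headroom for context, as noted above
    n_gpu_layers=-1,   # -1 offloads all layers; lower this if VRAM is tight
)

# Prompt format is illustrative; check the original model card for the intended template.
out = llm(
    "### Instruction:\nWrite a short, moody scene set in a dark forest.\n\n### Response:\n",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```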
Original model card can be found [here](https://huggingface.co/TeeZee/DarkForest-20B-v2.0)
<details><summary>Previous Model Card</summary>
Continuation of an ongoing initiative to bring the latest and greatest models to consumer hardware through SOTA techniques that reduce VRAM overhead.
After testing the new importance matrix quants for 11b and 8x7b models and being able to run them on machines without a dedicated GPU, we are now exploring the middle ground - 20b.
<b>❗❗Need a different quantization/model? Please open a community post and I'll get back to you - thanks ❗❗ </b>
<i>UPDATE 3/4/24: Newer quants ([IQ4_XS](https://github.com/ggerganov/llama.cpp/pull/5747), IQ2_S, etc) are confirmed working in Koboldcpp as of version <b>[1.60](https://github.com/LostRuins/koboldcpp/releases/tag/v1.60)</b> - if you run into any issues kindly let me know.</i>
IQ3_S has been generated after PR [#5829](https://github.com/ggerganov/llama.cpp/pull/5829) was merged. This should provide a significant speed boost even if you are offloading to CPU.
(Credits to [TeeZee](https://huggingface.co/TeeZee/) for the original model and [ikawrakow](https://github.com/ikawrakow) for the stellar work on IQ quants)
</details><br>
---
# DarkForest 20B v2.0

## Model Details
- To create this model, a two-step procedure was used. First, a new 20B model was created using [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
and [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3); details of the merge are in [darkforest_v2_step1.yml](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step1.yml)
- then [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B)
- and [TeeZee/BigMaid-20B-v1.0](https://huggingface.co/TeeZee/BigMaid-20B-v1.0) were used to produce the final model; the merge config is in [darkforest_v2_step2.yml](https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step2.yml)
- The resulting model has approximately 20 billion parameters.
**Warning: This model can produce NSFW content!**
## Results
- main difference from v1.0 - the model has a much better sense of humor.
- produces SFW and NSFW content without issues, switches context seamlessly.
- good at following instructions.
- good at tracking multiple characters in one scene.
- very creative; scenarios produced are mature and complicated, and the model doesn't shy away from writing about PTSD, mental issues or complicated relationships.
- NSFW output is more creative and surprising than typical limaRP output.
- definitely for mature audiences, not only because of the vivid NSFW content but also because of the overall maturity of the stories it produces.
- This is NOT Harry Potter level storytelling.
All comments are greatly appreciated; download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> |
lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B | lodrick-the-lafted | 2024-03-04T12:22:48Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-40K",
"dataset:garage-bAInd/Open-Platypus",
"dataset:jondurbin/airoboros-3.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T09:57:26Z | ---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-40K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-A-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B/resolve/main/ghp.png>
# Grafted-Hermetic-Platypus-A-2x7B
MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-v0.2](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-v0.2)
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for an pumpkin pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-A-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.23|
|AI2 Reasoning Challenge (25-Shot)|59.30|
|HellaSwag (10-Shot) |82.89|
|MMLU (5-Shot) |62.00|
|TruthfulQA (0-shot) |61.08|
|Winogrande (5-shot) |77.66|
|GSM8k (5-shot) |42.46|
|
Badgids/Gonzo-Code-7B | Badgids | 2024-03-02T18:08:00Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO",
"base_model:Nondzu/Mistral-7B-Instruct-v0.2-code-ft",
"base_model:xingyaoww/CodeActAgent-Mistral-7b-v0.1",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T17:08:42Z | ---
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft
- xingyaoww/CodeActAgent-Mistral-7b-v0.1
- beowolx/MistralHermes-CodePro-7B-v1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Gonzo-Code-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO) as a base.
### Models Merged
The following models were included in the merge:
* [Nondzu/Mistral-7B-Instruct-v0.2-code-ft](https://huggingface.co/Nondzu/Mistral-7B-Instruct-v0.2-code-ft)
* [xingyaoww/CodeActAgent-Mistral-7b-v0.1](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1)
* [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
# No parameters necessary for base model
- model: xingyaoww/CodeActAgent-Mistral-7b-v0.1
parameters:
density: 0.53
weight: 0.4
- model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
parameters:
density: 0.53
weight: 0.3
- model: beowolx/MistralHermes-CodePro-7B-v1
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
parameters:
int8_mask: true
dtype: bfloat16
``` |
cloudyu/mistral_11B_instruct_v0.1 | cloudyu | 2024-03-04T15:32:31Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T15:03:44Z | ---
license: cc-by-4.0
---
```
this is a 48-layer model based on the mistral architecture,
sft-tuned on vicgalle/alpaca-gpt4.
template is "{instruction} {inputs} \n {output}"
```
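A minimal sketch of applying the stated template with 🤗 Transformers is below; the instruction text and sampling settings are illustrative, not part of the original card.

```python
# Sketch: builds a prompt with the "{instruction} {inputs} \n {output}" template
# described above, then generates with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "cloudyu/mistral_11B_instruct_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

instruction = "write me a story about yosemite."
extra_inputs = ""  # optional extra context for the {inputs} slot
prompt = f"{instruction} {extra_inputs} \n "  # {output} is left empty for the model to fill

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```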
example output
```
<s> write me a story aboyt yosemite.
Here is a short story about the beauty and wonder of Yosemite National Park:
As the sun rose over the mountains, its warm golden rays illuminated the once- shadowed walls of the grandeur that was Yosemite. The early morning hush was broken by the sound of birds singing and the gentle rustling of leaves as the wind blowing through the trees.
In the heart of this natural wonder, there lived a young girl named Sarah. She had grown up in the village at the edge of the park, and her parents were some of the few who had made their homes there. They were farmers, but their real love was for the beauty of the land around them.
One day, while exploring the woods, Sarah stumbled upon a hidden clearing. It was filled with wildflowers, and in the center stood an old oak tree, its branches stretching out to provide shelter to many bird species. As she sat down under the tree, taking in the beauty of her surroundings, she heard a soft, melodious call.
It was a baby deer, and it had fallen into a narrow stream. Its mother was frantic with worry, calling out to her child from the shore. Sarah quickly ran to help, and with the help of a group of travelers passing through the park, they managed to save the baby.
From then on, Sarah would often visit the clearing, and soon, the animals began to trust her. She would sit and listen to the birds chirp, watch the butterflies flutter by, and even play with the curious foxes.
Years went by, and Sarah grew up to become a renowned conservationist, dedicated to preserving the beauty of nature. And so, thanks to her efforts, generations of people have been able to explore and appreciate the wonder of Yosemite.
The sun set over the mountains, casting long shadows across the landscape. As Sarah walked back to her village, she knew that she would never forget the memories she had made within those mountains, and that the magic of Yosemite would always remain with her.
And so, the cycle of life continued, with each day bringing new adventures and discoveries within the vast expanse of the beautiful and timeless Yosemite National Park.
``` |
ChaoticNeutrals/Eris_Remix_7B | ChaoticNeutrals | 2024-03-05T19:27:28Z | 736 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T01:34:48Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---

# Remix
Quants thanks to Lewdiculous: https://huggingface.co/Lewdiculous/Eris_Remix_7B-GGUF-IQ-Imatrix
EXL2 5bpw here: https://huggingface.co/Test157t/ChaoticNeutrals-Eris_Remix_7B-exl2-5bpw
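For the full-precision weights in this repo, a minimal loading sketch with 🤗 Transformers is below; the prompt and sampling settings are illustrative and not prescribed by this card.

```python
# Minimal sketch: loads the full-precision weights from this repo and chats via
# the tokenizer's chat template (assumed to be defined; otherwise format the
# prompt manually).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ChaoticNeutrals/Eris_Remix_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in character as a sarcastic space pirate."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```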
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: SpecialEdition
layer_range: [0, 32]
- model: Remix
layer_range: [0, 32]
merge_method: slerp
base_model: SpecialEdition
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
cloudyu/mistral_28B_instruct_v0.1 | cloudyu | 2024-03-05T03:27:34Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T02:05:04Z | ---
license: cc-by-4.0
---
```
this is a 128-layer model based on the mistral architecture,
sft-tuned on vicgalle/alpaca-gpt4.
template is "{instruction} {inputs} \n {output}"
```
example output
```
<s> write a story about new yrok city.
Once upon a time, in the heart of the United States, there was a city known as New York City. It was a bustling metropolis, teeming with people from all walks of life. The streets were alive with energy and excitement, as people went about their daily routines, working hard to make ends meet.
The city was a melting pot of cultures, where people from all over the world had come to start anew. They brought with them their traditions, their customs, and their dreams, creating a unique tapestry that made up the very essence of New York City.
One day, as the sun set over the skyscrapers, a young woman named Maya stepped off the subway, her eyes wide with wonder. She had just moved to the city from a small town in the midwest, and she couldn't believe her luck at having landed a job as a journalist for one of the biggest news organizations in the city.
As she made her way through the crowded streets, she took in everything around her - the sounds of horns honking, the smells of street food, the sight of people from all races and ethnicities living side by side. It was a world unlike anything she had ever experienced before, and it both excited and scared her.
Maya soon found herself immersed in the fast-paced world of journalism, reporting on everything from local events to national news. She worked long hours, but she loved every moment of it. She had finally found her place in the world, and she knew that this was only the beginning of her journey.
Over the next few years, Maya continued to explore the city, uncovering its secrets and sharing its stories with the world. She met people from all walks of life, each one leaving an indelible mark on her heart.
And even as the city changed and grew, Maya remained forever enchanted by its magic. For she had discovered something truly special about New York City: no matter how big or bright the world became, it would always be her home.</s>
``` |
core-3/kuno-royale-v3-7b | core-3 | 2024-03-08T03:01:36Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3",
"license:cc-by-nc-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T14:55:59Z | ---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
model-index:
- name: kuno-royale-v3-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.23
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.13
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v3-7b
name: Open LLM Leaderboard
---
# kuno-royale-v3-7b
Another experimental combination of eren23's ogno-monarch-jaskier merges and Kunoichi-DPO-v2-7B. Untested.
kuno-royale-v3-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "core-3/kuno-royale-v3-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_core-3__kuno-royale-v3-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.88|
|AI2 Reasoning Challenge (25-Shot)|71.76|
|HellaSwag (10-Shot) |88.23|
|MMLU (5-Shot) |65.06|
|TruthfulQA (0-shot) |71.13|
|Winogrande (5-shot) |82.32|
|GSM8k (5-shot) |70.81|
|
mobidic/solar-10b-platypus-lora | mobidic | 2024-03-06T20:11:14Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T19:47:56Z | ---
library_name: transformers
tags: []
license: cc-by-nc-nd-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** mobidic
- **Model type:** language generation
- **License:** cc-by-nc-nd-4.0
- **Finetuned from model:** solar-10B
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** mobidic/solar-10b-platypus-lora
|
arshadshk/Mistral-Hinglish-7B-Instruct-v0.2 | arshadshk | 2024-03-07T14:28:00Z | 736 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"lora",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T13:26:54Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
- lora
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
## Training Details
30k chat sessions of <= 1024 tokens each were selected from the [sarvamai/samvaad-hi-v1](https://huggingface.co/datasets/sarvamai/samvaad-hi-v1) dataset, with 2k sessions reserved for the test set. A LoRA adapter was fine-tuned with TRL's SFT trainer; a rough sketch of this setup follows the loss table below.
Test set loss:
| Model | Loss |
|-----------------------|------|
| Mistral-Hinglish-Instruct | 0.8 |
| Mistral-Instruct | 1.8 |
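The sketch below illustrates the described setup (LoRA adapter plus TRL's SFT trainer). It is not the actual training script: the base model, dataset schema, LoRA hyperparameters, and trainer arguments are assumptions, and the exact TRL/PEFT argument names vary between library versions.

```python
# Rough, hedged sketch of LoRA SFT with TRL; treat every value as illustrative.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

def to_text(example):
    # Assumed schema: a list of {"role", "content"} turns; adapt to the real column names.
    text = "<s>"
    for turn in example["messages"]:
        if turn["role"] == "user":
            text += f"[INST] {turn['content']} [/INST]"
        else:
            text += f" {turn['content']}</s>"
    return {"text": text}

dataset = load_dataset("sarvamai/samvaad-hi-v1", split="train").map(to_text)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed base model
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,                         # mirrors the <= 1024-token session filter above
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
    args=TrainingArguments(output_dir="mistral-hinglish-sft", per_device_train_batch_size=2),
)
trainer.train()
```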
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation is ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("arshadshk/Mistral-Hinglish-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("arshadshk/Mistral-Hinglish-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
|
automerger/OgnoExperiment27-7B | automerger | 2024-03-10T17:41:31Z | 736 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"base_model:yam-peleg/Experiment27-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-08T17:37:48Z | ---
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment27-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2)
* [yam-peleg/Experiment27-7B](https://huggingface.co/yam-peleg/Experiment27-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
layer_range: [0, 32]
- model: yam-peleg/Experiment27-7B
layer_range: [0, 32]
merge_method: slerp
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
``` |
Chickaboo/ChickaQ | Chickaboo | 2024-03-21T23:18:22Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:vilm/Quyen-SE-v0.1",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-09T01:53:41Z | ---
base_model:
- vilm/Quyen-SE-v0.1
- Qwen/Qwen1.5-0.5B-Chat
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# Models in the ChickaQ family
- **ChickaQ (0.5B)**
- **ChickaQ-Large (1.8B)**
- **ChickaQ-V2-Beta (0.9B)**
- **ChickaQ-V2-Large-Beta (3B)**
# mergedmodel
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [vilm/Quyen-SE-v0.1](https://huggingface.co/vilm/Quyen-SE-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: vilm/Quyen-SE-v0.1
# no parameters necessary for base model
- model: Qwen/Qwen1.5-0.5B-Chat
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: vilm/Quyen-SE-v0.1
parameters:
normalize: true
dtype: float16
``` |